/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
/* vim: set ts=8 sts=2 et sw=2 tw=80: */
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

// There are three kinds of samples done by the profiler.
//
// - A "periodic" sample is the most complex kind. It is done in response to a
//   timer while the profiler is active. It involves writing a stack trace plus
//   a variety of other values (memory measurements, responsiveness
//   measurements, markers, etc.) into the main ProfileBuffer. The sampling is
//   done from off-thread, and so SuspendAndSampleAndResumeThread() is used to
//   get the register values.
//
// - A "synchronous" sample is a simpler kind. It is done in response to an API
//   call (profiler_get_backtrace()). It involves writing a stack trace and
//   little else into a temporary ProfileBuffer, and wrapping that up in a
//   ProfilerBacktrace that can be subsequently used in a marker. The sampling
//   is done on-thread, and so Registers::SyncPopulate() is used to get the
//   register values.
//
// - A "backtrace" sample is the simplest kind. It is done in response to an
//   API call (profiler_suspend_and_sample_thread()). It involves getting a
//   stack trace via a ProfilerStackCollector; it does not write to a
//   ProfileBuffer. The sampling is done from off-thread, and so uses
//   SuspendAndSampleAndResumeThread() to get the register values.

Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. The following
operations now lock a mutex when they previously didn't; these are the ones
that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
#include <algorithm>
#include <ostream>
#include <fstream>
#include <sstream>
#include <errno.h>

#include "platform.h"
#include "PlatformMacros.h"
#include "mozilla/ArrayUtils.h"
#include "mozilla/Atomics.h"
#include "mozilla/UniquePtr.h"
#include "mozilla/Vector.h"
#include "GeckoProfiler.h"
#include "GeckoProfilerReporter.h"
#include "ProfilerIOInterposeObserver.h"
#include "mozilla/AutoProfilerLabel.h"
#include "mozilla/Scheduler.h"
#include "mozilla/StackWalk.h"
#include "mozilla/StaticPtr.h"
#include "mozilla/ThreadLocal.h"
#include "mozilla/TimeStamp.h"
#include "ThreadInfo.h"
#include "nsIHttpProtocolHandler.h"
#include "nsIObserverService.h"
#include "nsIXULAppInfo.h"
#include "nsIXULRuntime.h"
#include "nsDirectoryServiceUtils.h"
#include "nsDirectoryServiceDefs.h"
#include "nsJSPrincipals.h"
#include "nsMemoryReporterManager.h"
#include "nsScriptSecurityManager.h"
#include "nsXULAppAPI.h"
#include "nsProfilerStartParams.h"
#include "ProfilerParent.h"
#include "mozilla/Services.h"
#include "nsThreadUtils.h"
#include "ProfilerMarkerPayload.h"
#include "shared-libraries.h"
#include "prdtoa.h"
#include "prtime.h"

#ifdef MOZ_TASK_TRACER
# include "GeckoTaskTracer.h"
#endif

#if defined(GP_OS_android)
# include "FennecJNINatives.h"
# include "FennecJNIWrappers.h"
#endif

// Win32 builds always have frame pointers, so FramePointerStackWalk() always
// works.
#if defined(GP_PLAT_x86_windows)
# define HAVE_NATIVE_UNWIND
# define USE_FRAME_POINTER_STACK_WALK
#endif

// Win64 builds always omit frame pointers, so we use the slower
// MozStackWalk(), which works in that case.
#if defined(GP_PLAT_amd64_windows)
# define HAVE_NATIVE_UNWIND
# define USE_MOZ_STACK_WALK
#endif

// Mac builds only have frame pointers when MOZ_PROFILING is specified, so
// FramePointerStackWalk() only works in that case. We don't use MozStackWalk()
// on Mac.
#if defined(GP_OS_darwin) && defined(MOZ_PROFILING)
# define HAVE_NATIVE_UNWIND
# define USE_FRAME_POINTER_STACK_WALK
#endif

// Android builds use the ARM Exception Handling ABI to unwind.
#if defined(GP_PLAT_arm_android)
# define HAVE_NATIVE_UNWIND
# define USE_EHABI_STACKWALK
# include "EHABIStackWalk.h"
#endif

// Linux builds use LUL, which uses DWARF info to unwind stacks.
#if defined(GP_PLAT_amd64_linux) || defined(GP_PLAT_x86_linux) || \
    defined(GP_PLAT_mips64_linux)
# define HAVE_NATIVE_UNWIND
# define USE_LUL_STACKWALK
# include "lul/LulMain.h"
# include "lul/platform-linux-lul.h"

// On Linux we use LUL for periodic samples and synchronous samples, but we use
// FramePointerStackWalk for backtrace samples when MOZ_PROFILING is enabled.
// (See the comment at the top of the file for the definitions of
// periodic/synchronous/backtrace.)
//
// FramePointerStackWalk can produce incomplete stacks when the current entry
// is in a shared library without frame pointers. However, LUL can take a long
// time to initialize, which is undesirable for consumers of
// profiler_suspend_and_sample_thread() like the Background Hang Reporter.
# if defined(MOZ_PROFILING)
#  define USE_FRAME_POINTER_STACK_WALK
# endif
#endif

// We can only stackwalk without expensive initialization on platforms which
// support FramePointerStackWalk or MozStackWalk. LUL stackwalking requires
// initializing LUL, and EHABIStackWalk requires initializing EHABI, both of
// which can be expensive.
#if defined(USE_FRAME_POINTER_STACK_WALK) || defined(USE_MOZ_STACK_WALK)
# define HAVE_FASTINIT_NATIVE_UNWIND
#endif

#ifdef MOZ_VALGRIND
# include <valgrind/memcheck.h>
#else
# define VALGRIND_MAKE_MEM_DEFINED(_addr, _len) ((void)0)
#endif

#if defined(GP_OS_linux) || defined(GP_OS_android)
# include <ucontext.h>
#endif

using namespace mozilla;

LazyLogModule gProfilerLog("prof");

#if defined(GP_OS_android)
class GeckoJavaSampler
  : public java::GeckoJavaSampler::Natives<GeckoJavaSampler>
{
private:
  GeckoJavaSampler();

public:
  static double GetProfilerTime() {
    if (!profiler_is_active()) {
      return 0.0;
    }
    return profiler_time();
  }
};
#endif

// The mutex that guards accesses to CorePS and ActivePS.
class PSMutex : public StaticMutex {};

typedef BaseAutoLock<PSMutex> PSAutoLock;

// Only functions that take a PSLockRef arg can access CorePS's and ActivePS's
// fields.
typedef const PSAutoLock& PSLockRef;

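The PSLockRef idiom above — functions demand a lock-guard reference as compile-time proof that the mutex is held — can be sketched in isolation. This is a minimal stand-alone reconstruction using `std::mutex`; `AutoLock`, `LockRef`, and `IncrementCounter` are hypothetical names, not the profiler's API.

```cpp
#include <mutex>

// Hypothetical stand-ins for PSMutex / PSAutoLock / PSLockRef.
static std::mutex gMutex;

class AutoLock {
public:
  AutoLock() { gMutex.lock(); }
  ~AutoLock() { gMutex.unlock(); }
  AutoLock(const AutoLock&) = delete;
  AutoLock& operator=(const AutoLock&) = delete;
};

typedef const AutoLock& LockRef;

// Guarded state, standing in for the profiler's global state.
static int gCounter = 0;

// The compiler rejects any call that does not supply an AutoLock, so the
// guarded state cannot be touched without first taking the lock. The token
// is proof, not data, so the parameter is unnamed.
static int IncrementCounter(LockRef) { return ++gCounter; }
```

Usage is `AutoLock lock; IncrementCounter(lock);` — circumventing it requires visibly constructing a bogus token, which is easy to catch in review.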
#define PS_GET(type_, name_) \
  static type_ name_(PSLockRef) { return sInstance->m##name_; }

#define PS_GET_LOCKLESS(type_, name_) \
  static type_ name_() { return sInstance->m##name_; }

#define PS_GET_AND_SET(type_, name_) \
  PS_GET(type_, name_) \
  static void Set##name_(PSLockRef, type_ a##name_) \
    { sInstance->m##name_ = a##name_; }

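What the PS_GET family expands to can be shown with a self-contained sketch. `DEMO_GET`/`DEMO_GET_AND_SET`, `State`, and `Lock` below are hypothetical stand-ins (the real macros paste against CorePS/ActivePS members and take a PSLockRef).

```cpp
// Token-pasting accessor generators, mirroring PS_GET / PS_GET_AND_SET.
struct Lock {};
typedef const Lock& LockRef;

#define DEMO_GET(type_, name_) \
  static type_ name_(LockRef) { return sInstance->m##name_; }

#define DEMO_GET_AND_SET(type_, name_) \
  DEMO_GET(type_, name_) \
  static void Set##name_(LockRef, type_ a##name_) \
    { sInstance->m##name_ = a##name_; }

class State {
public:
  // Expands to:
  //   static int Interval(LockRef) { return sInstance->mInterval; }
  //   static void SetInterval(LockRef, int aInterval)
  //     { sInstance->mInterval = aInterval; }
  DEMO_GET_AND_SET(int, Interval)

  static State* sInstance;

private:
  int mInterval = 0;
};

State* State::sInstance = new State();
```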
// All functions in this file can run on multiple threads unless they have an
// NS_IsMainThread() assertion.

// This class contains the profiler's core global state, i.e. that which is
// valid even when the profiler is not active. Most profile operations can't do
// anything useful when this class is not instantiated, so we release-assert
// its non-nullness in all such operations.
//
// Accesses to CorePS are guarded by gPSMutex. Getters and setters take a
// PSAutoLock reference as an argument as proof that the gPSMutex is currently
// locked. This makes it clear when gPSMutex is locked and helps avoid
// accidental unlocked accesses to global state. There are ways to circumvent
// this mechanism, but please don't do so without *very* good reason and a
// detailed explanation.
//
// The exceptions to this rule:
//
// - mProcessStartTime, because it's immutable;
//
// - each thread's RacyThreadInfo object is accessible without locking via
//   TLSInfo::RacyThreadInfo().
class CorePS
{
private:
  CorePS()
    : mProcessStartTime(TimeStamp::ProcessCreation())
#ifdef USE_LUL_STACKWALK
    , mLul(nullptr)
#endif
  {}

  ~CorePS()
  {
    while (!mLiveThreads.empty()) {
      delete mLiveThreads.back();
      mLiveThreads.pop_back();
    }

    while (!mDeadThreads.empty()) {
      delete mDeadThreads.back();
      mDeadThreads.pop_back();
    }
  }

public:
  typedef std::vector<ThreadInfo*> ThreadVector;

  static void Create(PSLockRef aLock) { sInstance = new CorePS(); }

  static void Destroy(PSLockRef aLock)
  {
    delete sInstance;
    sInstance = nullptr;
  }

  // Unlike ActivePS::Exists(), CorePS::Exists() can be called without gPSMutex
  // being locked. This is because CorePS is instantiated so early on the main
  // thread that we don't have to worry about it being racy.
  static bool Exists() { return !!sInstance; }

  static void AddSizeOf(PSLockRef, MallocSizeOf aMallocSizeOf,
                        size_t& aProfSize, size_t& aLulSize)
  {
    aProfSize += aMallocSizeOf(sInstance);

    for (uint32_t i = 0; i < sInstance->mLiveThreads.size(); i++) {
      aProfSize +=
        sInstance->mLiveThreads.at(i)->SizeOfIncludingThis(aMallocSizeOf);
    }

    for (uint32_t i = 0; i < sInstance->mDeadThreads.size(); i++) {
      aProfSize +=
        sInstance->mDeadThreads.at(i)->SizeOfIncludingThis(aMallocSizeOf);
    }

    // Measurement of the following things may be added later if DMD finds it
    // is worthwhile:
    // - CorePS::mLiveThreads itself (its elements' children are measured
    //   above)
    // - CorePS::mDeadThreads itself (ditto)
    // - CorePS::mInterposeObserver

#if defined(USE_LUL_STACKWALK)
    if (sInstance->mLul) {
      aLulSize += sInstance->mLul->SizeOfIncludingThis(aMallocSizeOf);
    }
#endif
  }
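The accumulate-into-out-params shape of AddSizeOf — thread a caller-supplied measuring callback through the object graph, each node adding its footprint to an accumulator — can be shown with a toy measurer. `MeasureFn`, `Node`, and `DeclaredSize` below are illustrative stand-ins, not mozilla's MallocSizeOf machinery.

```cpp
#include <cstddef>
#include <vector>

// Stand-in for MallocSizeOf: given a pointer and a declared size, report the
// bytes to attribute to that object.
typedef size_t (*MeasureFn)(const void* aPtr, size_t aDeclared);

struct Node {
  size_t mDeclaredSize;
  // Each object reports its own footprint via the caller's measurer.
  size_t SizeOfIncludingThis(MeasureFn aMeasure) const {
    return aMeasure(this, mDeclaredSize);
  }
};

// Mirror of the AddSizeOf loop: walk the collection and accumulate into an
// out-parameter rather than returning a value.
static void AddSizeOf(const std::vector<Node*>& aNodes, MeasureFn aMeasure,
                      size_t& aTotal) {
  for (const Node* n : aNodes) {
    aTotal += n->SizeOfIncludingThis(aMeasure);
  }
}

// A trivial measurer that just trusts the declared size.
static size_t DeclaredSize(const void*, size_t aDeclared) { return aDeclared; }
```

Out-parameters let one traversal feed several accumulators (here the real code splits profiler size and LUL size), which a single return value could not.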

  // No PSLockRef is needed for this field because it's immutable.
  PS_GET_LOCKLESS(TimeStamp, ProcessStartTime)

  PS_GET(ThreadVector&, LiveThreads)
  PS_GET(ThreadVector&, DeadThreads)

#ifdef USE_LUL_STACKWALK
  static lul::LUL* Lul(PSLockRef) { return sInstance->mLul.get(); }
  static void SetLul(PSLockRef, UniquePtr<lul::LUL> aLul)
  {
    sInstance->mLul = Move(aLul);
  }
#endif

2017-04-21 06:28:23 +03:00
|
|
|
private:
|
|
|
|
// The singleton instance
|
|
|
|
static CorePS* sInstance;
|
|
|
|
|
|
|
|
// The time that the process started.
|
2017-06-02 02:41:48 +03:00
|
|
|
const TimeStamp mProcessStartTime;
|
2017-04-21 06:28:23 +03:00
|
|
|
|
|
|
|
// Info on all the registered threads, both live and dead. ThreadIds in
|
|
|
|
// mLiveThreads are unique. ThreadIds in mDeadThreads may not be, because
|
|
|
|
// ThreadIds can be reused. IsBeingProfiled() is true for all ThreadInfos in
|
|
|
|
// mDeadThreads because we don't hold on to ThreadInfos for non-profiled dead
|
|
|
|
// threads.
|
|
|
|
ThreadVector mLiveThreads;
|
|
|
|
ThreadVector mDeadThreads;
|
2017-07-13 02:35:14 +03:00
|
|
|
|
|
|
|
#ifdef USE_LUL_STACKWALK
|
|
|
|
// LUL's state. Null prior to the first activation, non-null thereafter.
|
|
|
|
UniquePtr<lul::LUL> mLul;
|
|
|
|
#endif
|
2017-04-21 06:28:23 +03:00
|
|
|
};
|
|
|
|
|
|
|
|
CorePS* CorePS::sInstance = nullptr;
class SamplerThread;

static SamplerThread*
NewSamplerThread(PSLockRef aLock, uint32_t aGeneration, double aInterval);

// This class contains the profiler's global state that is valid only when the
// profiler is active. When not instantiated, the profiler is inactive.
//
// Accesses to ActivePS are guarded by gPSMutex, in much the same fashion as
// CorePS.
//
class ActivePS
{
 private:
  static uint32_t AdjustFeatures(uint32_t aFeatures, uint32_t aFilterCount)
  {
    // Filter out any features unavailable in this platform/configuration.
    aFeatures &= profiler_get_available_features();

#if defined(GP_OS_android)
    if (!jni::IsFennec()) {
      aFeatures &= ~ProfilerFeature::Java;
    }
#endif

    // Always enable ProfilerFeature::Threads if we have a filter, because
    // users sometimes ask to filter by a list of threads but forget to
    // explicitly specify ProfilerFeature::Threads.
    if (aFilterCount > 0) {
      aFeatures |= ProfilerFeature::Threads;
    }

    return aFeatures;
  }
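The bit manipulation in AdjustFeatures() can be modeled in isolation: mask the requested set against what is available, then force a Threads-style bit on whenever thread filters are supplied. The feature bit values and the availability mask below are invented for illustration; the real values come from ProfilerFeature and profiler_get_available_features().

```cpp
#include <cassert>
#include <cstdint>

namespace miniprofiler {

// Hypothetical feature bits, one per flag, mirroring the bitfield style.
constexpr uint32_t FeatureJava      = 1u << 0;
constexpr uint32_t FeatureThreads   = 1u << 1;
constexpr uint32_t FeatureStackWalk = 1u << 2;

// Pretend Java is unavailable on this platform/configuration.
inline uint32_t AvailableFeatures() {
  return FeatureThreads | FeatureStackWalk;
}

inline uint32_t AdjustFeatures(uint32_t aFeatures, uint32_t aFilterCount) {
  // Drop any feature the platform cannot provide.
  aFeatures &= AvailableFeatures();

  // A thread-name filter is meaningless without thread profiling, so imply
  // the Threads feature rather than silently ignoring the filters.
  if (aFilterCount > 0) {
    aFeatures |= FeatureThreads;
  }
  return aFeatures;
}

}  // namespace miniprofiler
```

This "sanitize the request, then patch up implied flags" ordering matters: implying Threads after the availability mask keeps the implication even on platforms where the caller's other requests were stripped.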
  ActivePS(PSLockRef aLock, int aEntries, double aInterval,
           uint32_t aFeatures, const char** aFilters, uint32_t aFilterCount)
    : mGeneration(sNextGeneration++)
    , mEntries(aEntries)
    , mInterval(aInterval)
    , mFeatures(AdjustFeatures(aFeatures, aFilterCount))
    , mBuffer(MakeUnique<ProfileBuffer>(aEntries))
      // The new sampler thread doesn't start sampling immediately because the
      // main loop within Run() is blocked until this function's caller
      // unlocks gPSMutex.
    , mSamplerThread(NewSamplerThread(aLock, mGeneration, aInterval))
    , mInterposeObserver(ProfilerFeature::HasMainThreadIO(aFeatures)
                           ? new ProfilerIOInterposeObserver()
                           : nullptr)
    , mIsPaused(false)
#if defined(GP_OS_linux)
    , mWasPaused(false)
#endif
  {
    // Deep copy aFilters.
    MOZ_ALWAYS_TRUE(mFilters.resize(aFilterCount));
    for (uint32_t i = 0; i < aFilterCount; ++i) {
      mFilters[i] = aFilters[i];
    }

    if (mInterposeObserver) {
      // We need to register the observer on the main thread, because we want
      // to observe IO that happens on the main thread.
      if (NS_IsMainThread()) {
        IOInterposer::Register(IOInterposeObserver::OpAll, mInterposeObserver);
      } else {
        RefPtr<ProfilerIOInterposeObserver> observer = mInterposeObserver;
        NS_DispatchToMainThread(
          NS_NewRunnableFunction("ActivePS::ActivePS", [=]() {
            IOInterposer::Register(IOInterposeObserver::OpAll, observer);
          }));
      }
    }
  }
  ~ActivePS()
  {
    if (mInterposeObserver) {
      // We need to unregister the observer on the main thread, because that's
      // where we've registered it.
      if (NS_IsMainThread()) {
        IOInterposer::Unregister(IOInterposeObserver::OpAll,
                                 mInterposeObserver);
      } else {
        RefPtr<ProfilerIOInterposeObserver> observer = mInterposeObserver;
        NS_DispatchToMainThread(
          NS_NewRunnableFunction("ActivePS::~ActivePS", [=]() {
            IOInterposer::Unregister(IOInterposeObserver::OpAll, observer);
          }));
      }
    }
  }
  bool ThreadSelected(const char* aThreadName)
  {
    MOZ_RELEASE_ASSERT(sInstance);

    if (mFilters.empty()) {
      return true;
    }

    std::string name = aThreadName;
    std::transform(name.begin(), name.end(), name.begin(), ::tolower);

    for (uint32_t i = 0; i < mFilters.length(); ++i) {
      std::string filter = mFilters[i];
      std::transform(filter.begin(), filter.end(), filter.begin(), ::tolower);

      // Crude, non-UTF-8-compatible, case-insensitive substring search.
      if (name.find(filter) != std::string::npos) {
        return true;
      }
    }

    return false;
  }
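The filter logic above can be lifted into a self-contained function for clarity: lower-case both the thread name and each filter with ::tolower, then do a plain substring match. Like the original, this is byte-wise ASCII folding and not UTF-8 aware; the free-function form and std::vector filter list here are simplifications of the real member function and its Vector of std::string.

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <string>
#include <vector>

inline bool ThreadSelected(const char* aThreadName,
                           const std::vector<std::string>& aFilters) {
  // No filters means every thread is selected.
  if (aFilters.empty()) {
    return true;
  }

  std::string name = aThreadName;
  std::transform(name.begin(), name.end(), name.begin(), ::tolower);

  for (std::string filter : aFilters) {
    std::transform(filter.begin(), filter.end(), filter.begin(), ::tolower);

    // Crude, non-UTF-8-compatible, case-insensitive substring search.
    if (name.find(filter) != std::string::npos) {
      return true;
    }
  }
  return false;
}
```

Substring matching (rather than exact equality) is what lets a filter like "comp" select the "Compositor" thread.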
 public:
  static void Create(PSLockRef aLock, int aEntries, double aInterval,
                     uint32_t aFeatures,
                     const char** aFilters, uint32_t aFilterCount)
  {
    sInstance = new ActivePS(aLock, aEntries, aInterval, aFeatures,
                             aFilters, aFilterCount);
  }
  static MOZ_MUST_USE SamplerThread* Destroy(PSLockRef aLock)
  {
    auto samplerThread = sInstance->mSamplerThread;
    delete sInstance;
    sInstance = nullptr;

    return samplerThread;
  }
  static bool Exists(PSLockRef) { return !!sInstance; }
  static bool Equals(PSLockRef,
                     int aEntries, double aInterval, uint32_t aFeatures,
                     const char** aFilters, uint32_t aFilterCount)
  {
    if (sInstance->mEntries != aEntries ||
        sInstance->mInterval != aInterval ||
        sInstance->mFeatures != aFeatures ||
        sInstance->mFilters.length() != aFilterCount) {
      return false;
    }

    for (uint32_t i = 0; i < sInstance->mFilters.length(); ++i) {
      if (strcmp(sInstance->mFilters[i].c_str(), aFilters[i]) != 0) {
        return false;
      }
    }
    return true;
  }
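The Equals() check follows a cheap-first comparison order: scalar settings and the filter count are compared before the element-wise string walk. A trimmed model, using a plain struct and std::vector in place of the real ActivePS fields (names here are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

struct MiniSettings {
  int entries;
  double interval;
  uint32_t features;
  std::vector<std::string> filters;
};

inline bool Equals(const MiniSettings& a, const MiniSettings& b) {
  // Compare the cheap scalar fields (and the filter count) first, so the
  // string comparisons below only run when everything else already matches.
  if (a.entries != b.entries || a.interval != b.interval ||
      a.features != b.features || a.filters.size() != b.filters.size()) {
    return false;
  }

  // Element-wise, order-sensitive filter comparison, as in the original.
  for (size_t i = 0; i < a.filters.size(); ++i) {
    if (a.filters[i] != b.filters[i]) {
      return false;
    }
  }
  return true;
}
```

Note the comparison is order-sensitive: the same filters in a different order compare unequal, matching the original's indexed strcmp loop.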
  static size_t SizeOf(PSLockRef, MallocSizeOf aMallocSizeOf)
  {
    size_t n = aMallocSizeOf(sInstance);

    n += sInstance->mBuffer->SizeOfIncludingThis(aMallocSizeOf);

    return n;
  }
  static bool ShouldProfileThread(PSLockRef aLock, ThreadInfo* aInfo)
  {
    MOZ_RELEASE_ASSERT(sInstance);

    return ((aInfo->IsMainThread() || FeatureThreads(aLock)) &&
            sInstance->ThreadSelected(aInfo->Name()));
  }

  PS_GET(uint32_t, Generation)
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
|
2017-04-21 06:28:23 +03:00
|
|
|
PS_GET(int, Entries)
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
|
2017-04-21 06:28:23 +03:00
|
|
|
PS_GET(double, Interval)
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
|
2017-05-01 07:23:34 +03:00
|
|
|
PS_GET(uint32_t, Features)
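
  // Note: each PS_GET above defines a static getter whose PSLockRef argument
  // proves the caller holds gPSMutex. E.g. PS_GET(uint32_t, Features) expands
  // to roughly (a sketch; the exact macro is defined earlier in this file):
  //
  //   static uint32_t Features(PSLockRef) { return sInstance->mFeatures; }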
  #define PS_GET_FEATURE(n_, str_, Name_) \
    static bool Feature##Name_(PSLockRef) \
    { \
      return ProfilerFeature::Has##Name_(sInstance->mFeatures); \
    }

  PROFILER_FOR_EACH_FEATURE(PS_GET_FEATURE)

  #undef PS_GET_FEATURE
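
  // For instance, for the "privacy" feature the macro pair above generates
  // roughly the following (an approximate expansion; the real arguments come
  // from PROFILER_FOR_EACH_FEATURE's entry for that feature):
  //
  //   static bool FeaturePrivacy(PSLockRef)
  //   {
  //     return ProfilerFeature::HasPrivacy(sInstance->mFeatures);
  //   }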
  PS_GET(const Vector<std::string>&, Filters)
  static ProfileBuffer& Buffer(PSLockRef) { return *sInstance->mBuffer.get(); }
  PS_GET_AND_SET(bool, IsPaused)

#if defined(GP_OS_linux)
  PS_GET_AND_SET(bool, WasPaused)
#endif

private:
  // The singleton instance.
  static ActivePS* sInstance;
|
|
|
|
|
|
|
|
// We need to track activity generations. If we didn't we could have the
|
|
|
|
// following scenario.
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
//
|
2017-04-21 06:28:23 +03:00
|
|
|
// - profiler_stop() locks gPSMutex, de-instantiates ActivePS, unlocks
|
|
|
|
// gPSMutex, deletes the SamplerThread (which does a join).
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
//
|
2017-04-21 06:28:23 +03:00
|
|
|
// - profiler_start() runs on a different thread, locks gPSMutex,
|
|
|
|
// re-instantiates ActivePS, unlocks gPSMutex -- all before the join
|
|
|
|
// completes.
|
  //
  // - SamplerThread::Run() locks gPSMutex, sees that ActivePS is instantiated,
  //   and continues as if the start/stop pair didn't occur. Also
  //   profiler_stop() is stuck, unable to finish.
  //
  // By checking ActivePS *and* the generation, we can avoid this scenario.
  // sNextGeneration is used to track the next generation number; it is static
  // because it must persist across different ActivePS instantiations.
  const uint32_t mGeneration;
  static uint32_t sNextGeneration;

  // The number of entries in mBuffer.
  const int mEntries;

  // The interval between samples, measured in milliseconds.
  const double mInterval;

  // The profile features that are enabled.
  const uint32_t mFeatures;
  // Substrings of names of threads we want to profile.
  Vector<std::string> mFilters;

  // The buffer into which all samples are recorded. Always non-null. Always
  // used in conjunction with CorePS::m{Live,Dead}Threads.
  const UniquePtr<ProfileBuffer> mBuffer;

  // The current sampler thread. This class is not responsible for destroying
  // the SamplerThread object; the Destroy() method returns it so the caller
  // can destroy it.
  SamplerThread* const mSamplerThread;

  // The interposer that records main thread I/O.
  const RefPtr<ProfilerIOInterposeObserver> mInterposeObserver;

  // Is the profiler paused?
  bool mIsPaused;

#if defined(GP_OS_linux)
  // Used to record whether the profiler was paused just before forking. False
  // at all times except just before/after forking.
  bool mWasPaused;
#endif
};

ActivePS* ActivePS::sInstance = nullptr;
uint32_t ActivePS::sNextGeneration = 0;

#undef PS_GET
#undef PS_GET_LOCKLESS
#undef PS_GET_AND_SET

// The mutex that guards accesses to CorePS and ActivePS.
static PSMutex gPSMutex;

// The preferred way to check profiler activeness and features is via
// ActivePS(). However, that requires locking gPSMutex. There are some hot
// operations where absolute precision isn't required, so we duplicate the
// activeness/feature state in a lock-free manner in this class.
class RacyFeatures
{
public:
  static void SetActive(uint32_t aFeatures)
  {
    sActiveAndFeatures = Active | aFeatures;
  }

  static void SetInactive() { sActiveAndFeatures = 0; }

  static bool IsActive() { return uint32_t(sActiveAndFeatures) & Active; }

  static bool IsActiveWithFeature(uint32_t aFeature)
  {
    uint32_t af = sActiveAndFeatures;  // copy it first
    return (af & Active) && (af & aFeature);
  }

  static bool IsActiveWithoutPrivacy()
  {
    uint32_t af = sActiveAndFeatures;  // copy it first
    return (af & Active) && !(af & ProfilerFeature::Privacy);
  }

private:
  static const uint32_t Active = 1u << 31;

  // Ensure Active doesn't overlap with any of the feature bits.
#define NO_OVERLAP(n_, str_, Name_) \
  static_assert(ProfilerFeature::Name_ != Active, "bad Active value");

  PROFILER_FOR_EACH_FEATURE(NO_OVERLAP);

#undef NO_OVERLAP

  // We combine the active bit with the feature bits so they can be read or
  // written in a single atomic operation.
  static Atomic<uint32_t> sActiveAndFeatures;
};

Atomic<uint32_t> RacyFeatures::sActiveAndFeatures(0);

// Each live thread has a ThreadInfo, and we store a reference to it in TLS.
// This class encapsulates that TLS.
class TLSInfo
{
public:
  static bool Init(PSLockRef)
  {
    bool ok1 = sThreadInfo.init();
    bool ok2 = AutoProfilerLabel::sPseudoStack.init();
    return ok1 && ok2;
  }

  // Get the entire ThreadInfo. Accesses are guarded by gPSMutex.
  static ThreadInfo* Info(PSLockRef) { return sThreadInfo.get(); }

  // Get only the RacyThreadInfo. Accesses are not guarded by gPSMutex.
  static RacyThreadInfo* RacyInfo()
  {
    ThreadInfo* info = sThreadInfo.get();
    return info ? info->RacyInfo().get() : nullptr;
  }

  // Get only the PseudoStack. Accesses are not guarded by gPSMutex. RacyInfo()
  // can also be used to get the PseudoStack, but that is marginally slower
  // because it requires an extra pointer indirection.
  static PseudoStack* Stack() { return AutoProfilerLabel::sPseudoStack.get(); }

  static void SetInfo(PSLockRef, ThreadInfo* aInfo)
  {
    sThreadInfo.set(aInfo);
    AutoProfilerLabel::sPseudoStack.set(
      aInfo ? aInfo->RacyInfo().get() : nullptr);  // an upcast
  }

private:
  // This is a non-owning reference to the ThreadInfo; CorePS::mLiveThreads is
  // the owning reference. On thread destruction, this reference is cleared and
  // the ThreadInfo is destroyed or transferred to CorePS::mDeadThreads.
  static MOZ_THREAD_LOCAL(ThreadInfo*) sThreadInfo;
};

MOZ_THREAD_LOCAL(ThreadInfo*) TLSInfo::sThreadInfo;

// Although you can access a thread's PseudoStack via TLSInfo::sThreadInfo, we
// also have a second TLS pointer directly to the PseudoStack. Here's why.
//
// - We need to be able to push to and pop from the PseudoStack in
//   AutoProfilerLabel.
//
// - The class functions are hot and must be defined in GeckoProfiler.h so they
//   can be inlined.
//
// - We don't want to expose TLSInfo (and ThreadInfo) in GeckoProfiler.h.
//
// This second pointer isn't ideal, but does provide a way to satisfy those
// constraints. TLSInfo is responsible for updating it.
MOZ_THREAD_LOCAL(PseudoStack*) AutoProfilerLabel::sPseudoStack;
|
|
|
// The name of the main thread.
|
2017-03-07 08:54:56 +03:00
|
|
|
static const char* const kMainThreadName = "GeckoMain";

////////////////////////////////////////////////////////////////////////
// BEGIN sampling/unwinding code

// The registers used for stack unwinding and a few other sampling purposes.
// The ctor does nothing; users are responsible for filling in the fields.
class Registers
{
public:
  Registers() : mPC{nullptr}, mSP{nullptr}, mFP{nullptr}, mLR{nullptr} {}

#if defined(HAVE_NATIVE_UNWIND)
  // Fills in mPC, mSP, mFP, mLR, and mContext for a synchronous sample.
  void SyncPopulate();
#endif

  void Clear() { memset(this, 0, sizeof(*this)); }

  // These fields are filled in by
  // SamplerThread::SuspendAndSampleAndResumeThread() for periodic and
  // backtrace samples, and by SyncPopulate() for synchronous samples.
  Address mPC;    // Instruction pointer.
  Address mSP;    // Stack pointer.
  Address mFP;    // Frame pointer.
  Address mLR;    // ARM link register.
#if defined(GP_OS_linux) || defined(GP_OS_android)
  // This contains all the registers, which means it duplicates the four fields
  // above. This is ok.
  ucontext_t* mContext; // The context from the signal handler.
#endif
};

// Setting MAX_NATIVE_FRAMES too high risks the unwinder wasting a lot of time
// looping on corrupted stacks.
//
// The PseudoStack frame size is found in PseudoStack::MaxEntries.
static const size_t MAX_NATIVE_FRAMES = 1024;
static const size_t MAX_JS_FRAMES = 1024;

struct NativeStack
{
  void* mPCs[MAX_NATIVE_FRAMES];
  void* mSPs[MAX_NATIVE_FRAMES];
  size_t mCount;  // Number of entries filled.

  NativeStack()
    : mPCs(), mSPs(), mCount(0)
  {}
};

Atomic<bool> WALKING_JS_STACK(false);

struct AutoWalkJSStack
{
  bool walkAllowed;

  AutoWalkJSStack() : walkAllowed(false) {
    walkAllowed = WALKING_JS_STACK.compareExchange(false, true);
  }

  ~AutoWalkJSStack() {
    if (walkAllowed) {
      WALKING_JS_STACK = false;
    }
  }
};

// Merges the pseudo-stack, native stack, and JS stack, outputting the details
// to aCollector.
static void
MergeStacks(uint32_t aFeatures, bool aIsSynchronous,
            const ThreadInfo& aThreadInfo, const Registers& aRegs,
            const NativeStack& aNativeStack,
            ProfilerStackCollector& aCollector)
{
  // WARNING: this function runs within the profiler's "critical section".
  // WARNING: this function might be called while the profiler is inactive, and
  //          cannot rely on ActivePS.

  NotNull<RacyThreadInfo*> racyInfo = aThreadInfo.RacyInfo();
  js::ProfileEntry* pseudoEntries = racyInfo->entries;
  uint32_t pseudoCount = racyInfo->stackSize();
  JSContext* context = aThreadInfo.mContext;

  // Make a copy of the JS stack into a JSFrame array. This is necessary since,
  // like the native stack, the JS stack is iterated youngest-to-oldest and we
  // need to iterate oldest-to-youngest when adding entries to aCollector.

  // Synchronous sampling reports an invalid buffer generation to
  // ProfilingFrameIterator to avoid incorrectly resetting the generation of
  // sampled JIT entries inside the JS engine. See note below concerning 'J'
  // entries.
  uint32_t startBufferGen = UINT32_MAX;
  if (!aIsSynchronous && aCollector.Generation().isSome()) {
    startBufferGen = *aCollector.Generation();
  }
  uint32_t jsCount = 0;
  JS::ProfilingFrameIterator::Frame jsFrames[MAX_JS_FRAMES];

  // Only walk jit stack if profiling frame iterator is turned on.
  if (context && JS::IsProfilingEnabledForContext(context)) {
    AutoWalkJSStack autoWalkJSStack;
    const uint32_t maxFrames = ArrayLength(jsFrames);

    if (autoWalkJSStack.walkAllowed) {
      JS::ProfilingFrameIterator::RegisterState registerState;
      registerState.pc = aRegs.mPC;
      registerState.sp = aRegs.mSP;
      registerState.lr = aRegs.mLR;
      registerState.fp = aRegs.mFP;

      JS::ProfilingFrameIterator jsIter(context, registerState,
                                        startBufferGen);
      for (; jsCount < maxFrames && !jsIter.done(); ++jsIter) {
        // See note below regarding 'J' entries.
        if (aIsSynchronous || jsIter.isWasm()) {
          uint32_t extracted =
            jsIter.extractStack(jsFrames, jsCount, maxFrames);
          jsCount += extracted;
          if (jsCount == maxFrames) {
            break;
          }
        } else {
          Maybe<JS::ProfilingFrameIterator::Frame> frame =
            jsIter.getPhysicalFrameWithoutLabel();
          if (frame.isSome()) {
            jsFrames[jsCount++] = frame.value();
          }
        }
      }
    }
  }

  // While the pseudo-stack array is ordered oldest-to-youngest, the JS and
  // native arrays are ordered youngest-to-oldest. We must add frames to
  // aCollector oldest-to-youngest. Thus, iterate over the pseudo-stack
  // forwards and the JS and native arrays backwards. Note: this means the
  // terminating condition for jsIndex and nativeIndex is being < 0.
  uint32_t pseudoIndex = 0;
  int32_t jsIndex = jsCount - 1;
  int32_t nativeIndex = aNativeStack.mCount - 1;

  uint8_t* lastPseudoCppStackAddr = nullptr;

  // Iterate as long as there is at least one frame remaining.
  while (pseudoIndex != pseudoCount || jsIndex >= 0 || nativeIndex >= 0) {
    // There are 1 to 3 frames available. Find and add the oldest.
    uint8_t* pseudoStackAddr = nullptr;
    uint8_t* jsStackAddr = nullptr;
    uint8_t* nativeStackAddr = nullptr;

    if (pseudoIndex != pseudoCount) {
      js::ProfileEntry& pseudoEntry = pseudoEntries[pseudoIndex];

      if (pseudoEntry.isCpp()) {
        lastPseudoCppStackAddr = (uint8_t*) pseudoEntry.stackAddress();
      }

      // Skip any JS_OSR frames. Such frames are used when the JS interpreter
      // enters a jit frame on a loop edge (via on-stack-replacement, or OSR).
      // To avoid both the pseudoframe and jit frame being recorded (and
      // showing up twice), the interpreter marks the interpreter pseudostack
      // frame as JS_OSR to ensure that it doesn't get counted.
      if (pseudoEntry.kind() == js::ProfileEntry::Kind::JS_OSR) {
        pseudoIndex++;
        continue;
      }

      MOZ_ASSERT(lastPseudoCppStackAddr);
      pseudoStackAddr = lastPseudoCppStackAddr;
    }

    if (jsIndex >= 0) {
      jsStackAddr = (uint8_t*) jsFrames[jsIndex].stackAddress;
    }

    if (nativeIndex >= 0) {
      nativeStackAddr = (uint8_t*) aNativeStack.mSPs[nativeIndex];
    }

    // If there's a native stack entry which has the same SP as a pseudo stack
    // entry, pretend we didn't see the native stack entry. Ditto for a native
    // stack entry which has the same SP as a JS stack entry. In effect this
    // means pseudo or JS entries trump conflicting native entries.
    if (nativeStackAddr && (pseudoStackAddr == nativeStackAddr ||
                            jsStackAddr == nativeStackAddr)) {
      nativeStackAddr = nullptr;
      nativeIndex--;
      MOZ_ASSERT(pseudoStackAddr || jsStackAddr);
    }

    // Sanity checks.
    MOZ_ASSERT_IF(pseudoStackAddr, pseudoStackAddr != jsStackAddr &&
                                   pseudoStackAddr != nativeStackAddr);
    MOZ_ASSERT_IF(jsStackAddr, jsStackAddr != pseudoStackAddr &&
                               jsStackAddr != nativeStackAddr);
    MOZ_ASSERT_IF(nativeStackAddr, nativeStackAddr != pseudoStackAddr &&
                                   nativeStackAddr != jsStackAddr);

    // Check to see if the pseudo-stack frame is top-most.
    if (pseudoStackAddr > jsStackAddr && pseudoStackAddr > nativeStackAddr) {
      MOZ_ASSERT(pseudoIndex < pseudoCount);
      js::ProfileEntry& pseudoEntry = pseudoEntries[pseudoIndex];

      // Pseudo-frames with the CPP_MARKER_FOR_JS kind are just annotations and
      // should not be recorded in the profile.
      if (pseudoEntry.kind() != js::ProfileEntry::Kind::CPP_MARKER_FOR_JS) {
        // The JIT only allows the top-most entry to have a nullptr pc.
        MOZ_ASSERT_IF(pseudoEntry.isJs() && pseudoEntry.script() &&
                        !pseudoEntry.pc(),
                      &pseudoEntry ==
                        &racyInfo->entries[racyInfo->stackSize() - 1]);
        aCollector.CollectPseudoEntry(pseudoEntry);
      }
      pseudoIndex++;
      continue;
    }

    // Check to see if the JS jit stack frame is top-most.
    if (jsStackAddr > nativeStackAddr) {
      MOZ_ASSERT(jsIndex >= 0);
      const JS::ProfilingFrameIterator::Frame& jsFrame = jsFrames[jsIndex];

      // Stringifying non-wasm JIT frames is delayed until streaming time. To
      // re-lookup the entry in the JitcodeGlobalTable, we need to store the
      // JIT code address (OptInfoAddr) in the circular buffer.
      //
      // Note that we cannot do this when we are synchronously sampling the
      // current thread; that is, when called from profiler_get_backtrace. The
      // captured backtrace is usually externally stored for an indeterminate
      // amount of time, such as in nsRefreshDriver. Problematically, the
      // stored backtrace may be alive across a GC during which the profiler
      // itself is disabled. In that case, the JS engine is free to discard its
      // JIT code. This means that if we inserted such OptInfoAddr entries into
      // the buffer, nsRefreshDriver would now be holding on to a backtrace
      // with stale JIT code return addresses.
      if (aIsSynchronous ||
          jsFrame.kind == JS::ProfilingFrameIterator::Frame_Wasm) {
        aCollector.CollectWasmFrame(jsFrame.label);
      } else {
        MOZ_ASSERT(jsFrame.kind == JS::ProfilingFrameIterator::Frame_Ion ||
                   jsFrame.kind == JS::ProfilingFrameIterator::Frame_Baseline);
        aCollector.CollectJitReturnAddr(jsFrames[jsIndex].returnAddress);
      }

      jsIndex--;
      continue;
    }

    // If we reach here, there must be a native stack entry and it must be the
    // greatest entry.
    if (nativeStackAddr) {
      MOZ_ASSERT(nativeIndex >= 0);
      void* addr = (void*)aNativeStack.mPCs[nativeIndex];
      aCollector.CollectNativeLeafAddr(addr);
    }
    if (nativeIndex >= 0) {
      nativeIndex--;
    }
  }

  // Update the JS context with the current profile sample buffer generation.
  //
  // Do not do this for synchronous samples, which use their own
  // ProfileBuffers instead of the global one in CorePS.
  if (!aIsSynchronous && context && aCollector.Generation().isSome()) {
    MOZ_ASSERT(*aCollector.Generation() >= startBufferGen);
    uint32_t lapCount = *aCollector.Generation() - startBufferGen;
    JS::UpdateJSContextProfilerSampleBufferGen(context,
                                               *aCollector.Generation(),
                                               lapCount);
  }
}

#if defined(GP_OS_windows)
static HANDLE GetThreadHandle(PlatformData* aData);
#endif

#if defined(USE_FRAME_POINTER_STACK_WALK) || defined(USE_MOZ_STACK_WALK)
static void
StackWalkCallback(uint32_t aFrameNumber, void* aPC, void* aSP, void* aClosure)
{
  NativeStack* nativeStack = static_cast<NativeStack*>(aClosure);
  MOZ_ASSERT(nativeStack->mCount < MAX_NATIVE_FRAMES);
  nativeStack->mSPs[nativeStack->mCount] = aSP;
  nativeStack->mPCs[nativeStack->mCount] = aPC;
  nativeStack->mCount++;
}
#endif

#if defined(USE_FRAME_POINTER_STACK_WALK)
static void
DoFramePointerBacktrace(PSLockRef aLock, const ThreadInfo& aThreadInfo,
                        const Registers& aRegs, NativeStack& aNativeStack)
{
  // WARNING: this function runs within the profiler's "critical section".
  // WARNING: this function might be called while the profiler is inactive, and
  //          cannot rely on ActivePS.

  // Start with the current function. We use 0 as the frame number here because
  // the FramePointerStackWalk() call below will use 1..N. This is a bit weird
  // but it doesn't matter because StackWalkCallback() doesn't use the frame
  // number argument.
  StackWalkCallback(/* frameNum */ 0, aRegs.mPC, aRegs.mSP, &aNativeStack);

  uint32_t maxFrames = uint32_t(MAX_NATIVE_FRAMES - aNativeStack.mCount);

  void* stackEnd = aThreadInfo.StackTop();
  if (aRegs.mFP >= aRegs.mSP && aRegs.mFP <= stackEnd) {
    FramePointerStackWalk(StackWalkCallback, /* skipFrames */ 0, maxFrames,
                          &aNativeStack, reinterpret_cast<void**>(aRegs.mFP),
                          stackEnd);
  }
}
#endif

#if defined(USE_MOZ_STACK_WALK)
static void
DoMozStackWalkBacktrace(PSLockRef aLock, const ThreadInfo& aThreadInfo,
                        const Registers& aRegs, NativeStack& aNativeStack)
{
  // WARNING: this function runs within the profiler's "critical section".
  // WARNING: this function might be called while the profiler is inactive, and
  //          cannot rely on ActivePS.

  // Start with the current function. We use 0 as the frame number here because
  // the MozStackWalkThread() call below will use 1..N. This is a bit weird but
  // it doesn't matter because StackWalkCallback() doesn't use the frame number
  // argument.
  StackWalkCallback(/* frameNum */ 0, aRegs.mPC, aRegs.mSP, &aNativeStack);

  uint32_t maxFrames = uint32_t(MAX_NATIVE_FRAMES - aNativeStack.mCount);

  HANDLE thread = GetThreadHandle(aThreadInfo.GetPlatformData());
  MOZ_ASSERT(thread);
  MozStackWalkThread(StackWalkCallback, /* skipFrames */ 0, maxFrames,
                     &aNativeStack, thread, /* context */ nullptr);
}
#endif

#ifdef USE_EHABI_STACKWALK
static void
DoEHABIBacktrace(PSLockRef aLock, const ThreadInfo& aThreadInfo,
                 const Registers& aRegs, NativeStack& aNativeStack)
{
  // WARNING: this function runs within the profiler's "critical section".
  // WARNING: this function might be called while the profiler is inactive, and
  //          cannot rely on ActivePS.

  const mcontext_t* mcontext = &aRegs.mContext->uc_mcontext;
  mcontext_t savedContext;
  NotNull<RacyThreadInfo*> racyInfo = aThreadInfo.RacyInfo();

  // The pseudostack contains an "EnterJIT" frame whenever we enter
  // JIT code with profiling enabled; the stack pointer value points to
  // the saved registers. We use this to resume unwinding after
  // encountering JIT code.
  for (uint32_t i = racyInfo->stackSize(); i > 0; --i) {
    // The pseudostack grows towards higher indices, so we iterate
    // backwards (from callee to caller).
    js::ProfileEntry& entry = racyInfo->entries[i - 1];
    if (!entry.isJs() && strcmp(entry.label(), "EnterJIT") == 0) {
      // Found JIT entry frame. Unwind up to that point (i.e., force
      // the stack walk to stop before the block of saved registers;
      // note that it yields nondecreasing stack pointers), then restore
      // the saved state.
      uint32_t* vSP = reinterpret_cast<uint32_t*>(entry.stackAddress());

      aNativeStack.mCount +=
        EHABIStackWalk(*mcontext, /* stackBase = */ vSP,
                       aNativeStack.mSPs + aNativeStack.mCount,
                       aNativeStack.mPCs + aNativeStack.mCount,
                       MAX_NATIVE_FRAMES - aNativeStack.mCount);

      memset(&savedContext, 0, sizeof(savedContext));

      // See also: struct EnterJITStack in js/src/jit/arm/Trampoline-arm.cpp
      savedContext.arm_r4 = *vSP++;
      savedContext.arm_r5 = *vSP++;
      savedContext.arm_r6 = *vSP++;
      savedContext.arm_r7 = *vSP++;
      savedContext.arm_r8 = *vSP++;
      savedContext.arm_r9 = *vSP++;
      savedContext.arm_r10 = *vSP++;
      savedContext.arm_fp = *vSP++;
      savedContext.arm_lr = *vSP++;
      savedContext.arm_sp = reinterpret_cast<uint32_t>(vSP);
      savedContext.arm_pc = savedContext.arm_lr;
      mcontext = &savedContext;
    }
  }

  // Now unwind whatever's left (starting from either the last EnterJIT frame
  // or, if no EnterJIT was found, the original registers).
  aNativeStack.mCount +=
    EHABIStackWalk(*mcontext, aThreadInfo.StackTop(),
|
Bug 1357829 - Part 1: Expose profiler_suspend_and_sample_thread, r=njn
This patch performs a refactoring to the internals of the profiler in order to
expose a function, profiler_suspend_and_sample_thread, which can be called from a
background thread to suspend, sample the native stack, and then resume the
passed-in target thread.
The interface was designed to expose as few internals of the profiler as
possible, exposing only a single callback which accepts the list of program
counters and stack pointers collected during the backtrace.
A method `profiler_current_thread_id` was also added to get the thread_id of the
current thread, which can then be passed by another thread into
profiler_suspend_and_sample_thread to sample the stack of that thread.
This is implemented in two parts:
1) Splitting SamplerThread into two classes: Sampler, and SamplerThread.
Sampler was created to extract the core logic from SamplerThread which manages
Unix signals on Android and Linux and suspends the target thread on all
platforms. SamplerThread was then modified to subclass this type, adding the
extra methods and fields required for the creation and management of the actual
Sampler Thread.
Some work was done to ensure that the methods on Sampler would not require
ActivePS to be present, as we intend to sample threads when the profiler is not
active for the Background Hang Reporter.
2) Moving the Tick() logic into the TickController interface.
A TickController interface was added to platform which has 2 methods: Tick and
Backtrace. The Tick method replaces the previous Tick() static method, allowing
it to be overridden by a different consumer of SuspendAndSampleAndResumeThread,
while the Backtrace() method replaces the previous MergeStacksIntoProfile
method, allowing it to be overridden by different consumers of
DoNativeBacktrace.
This interface object is then used to wrap implementation specific data, such as
the ProfilerBuffer, and is threaded through the SuspendAndSampleAndResumeThread
and DoNativeBacktrace methods.
This change added 2 virtual calls to the SamplerThread's critical section, which
I believe should be a small enough overhead that it will not affect profiling
performance. These virtual calls could be avoided using templating, but I
decided that doing so would be unnecessary.
MozReview-Commit-ID: AT48xb2asgV
2017-05-02 22:36:35 +03:00
|
|
|
aNativeStack.mSPs + aNativeStack.mCount,
|
|
|
|
aNativeStack.mPCs + aNativeStack.mCount,
|
|
|
|
MAX_NATIVE_FRAMES - aNativeStack.mCount);
|
2017-02-09 01:02:41 +03:00
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#ifdef USE_LUL_STACKWALK
|
2017-03-29 07:49:44 +03:00
|
|
|
|
|
|
|
// See the comment at the callsite for why this function is necessary.
|
|
|
|
#if defined(MOZ_HAVE_ASAN_BLACKLIST)
|
|
|
|
MOZ_ASAN_BLACKLIST static void
|
|
|
|
ASAN_memcpy(void* aDst, const void* aSrc, size_t aLen)
|
|
|
|
{
|
|
|
|
// The obvious thing to do here is call memcpy(). However, although
|
|
|
|
// ASAN_memcpy() is not instrumented by ASAN, memcpy() still is, and the
|
|
|
|
// false positive still manifests! So we must implement memcpy() ourselves
|
|
|
|
// within this function.
|
|
|
|
char* dst = static_cast<char*>(aDst);
|
|
|
|
const char* src = static_cast<const char*>(aSrc);
|
|
|
|
|
|
|
|
for (size_t i = 0; i < aLen; i++) {
|
|
|
|
dst[i] = src[i];
|
|
|
|
}
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2017-02-09 01:02:41 +03:00
|
|
|
static void
|
2017-08-04 21:08:28 +03:00
|
|
|
DoLULBacktrace(PSLockRef aLock, const ThreadInfo& aThreadInfo,
|
|
|
|
const Registers& aRegs, NativeStack& aNativeStack)
|
2017-02-09 01:02:41 +03:00
|
|
|
{
|
2017-06-19 02:09:46 +03:00
|
|
|
// WARNING: this function runs within the profiler's "critical section".
|
2017-07-25 09:47:14 +03:00
|
|
|
// WARNING: this function might be called while the profiler is inactive, and
|
|
|
|
// cannot rely on ActivePS.
|
2017-06-19 02:09:46 +03:00
|
|
|
|
2017-06-19 02:38:15 +03:00
|
|
|
const mcontext_t* mc = &aRegs.mContext->uc_mcontext;
|
2017-02-09 01:02:41 +03:00
|
|
|
|
|
|
|
lul::UnwindRegs startRegs;
|
|
|
|
memset(&startRegs, 0, sizeof(startRegs));
|
|
|
|
|
2017-02-17 16:57:03 +03:00
|
|
|
#if defined(GP_PLAT_amd64_linux)
|
2017-02-09 01:02:41 +03:00
|
|
|
startRegs.xip = lul::TaggedUWord(mc->gregs[REG_RIP]);
|
|
|
|
startRegs.xsp = lul::TaggedUWord(mc->gregs[REG_RSP]);
|
|
|
|
startRegs.xbp = lul::TaggedUWord(mc->gregs[REG_RBP]);
|
2017-02-17 16:57:03 +03:00
|
|
|
#elif defined(GP_PLAT_arm_android)
|
2017-02-09 01:02:41 +03:00
|
|
|
startRegs.r15 = lul::TaggedUWord(mc->arm_pc);
|
|
|
|
startRegs.r14 = lul::TaggedUWord(mc->arm_lr);
|
|
|
|
startRegs.r13 = lul::TaggedUWord(mc->arm_sp);
|
|
|
|
startRegs.r12 = lul::TaggedUWord(mc->arm_ip);
|
|
|
|
startRegs.r11 = lul::TaggedUWord(mc->arm_fp);
|
|
|
|
startRegs.r7 = lul::TaggedUWord(mc->arm_r7);
|
2017-02-17 16:57:03 +03:00
|
|
|
#elif defined(GP_PLAT_x86_linux) || defined(GP_PLAT_x86_android)
|
2017-02-09 01:02:41 +03:00
|
|
|
startRegs.xip = lul::TaggedUWord(mc->gregs[REG_EIP]);
|
|
|
|
startRegs.xsp = lul::TaggedUWord(mc->gregs[REG_ESP]);
|
|
|
|
startRegs.xbp = lul::TaggedUWord(mc->gregs[REG_EBP]);
|
2017-11-13 09:23:50 +03:00
|
|
|
#elif defined(GP_PLAT_mips64_linux)
|
|
|
|
startRegs.pc = lul::TaggedUWord(mc->pc);
|
|
|
|
startRegs.sp = lul::TaggedUWord(mc->gregs[29]);
|
|
|
|
startRegs.fp = lul::TaggedUWord(mc->gregs[30]);
|
2017-02-09 01:02:41 +03:00
|
|
|
#else
|
|
|
|
# error "Unknown plat"
|
|
|
|
#endif
|
|
|
|
|
|
|
|
// Copy up to N_STACK_BYTES from rsp-REDZONE upwards, but not going past the
|
|
|
|
// stack's registered top point. Do some basic sanity checks too. This
|
|
|
|
// assumes that the TaggedUWord holding the stack pointer value is valid, but
|
|
|
|
// it should be, since it was constructed that way in the code just above.
|
|
|
|
|
2017-04-18 11:30:14 +03:00
|
|
|
// We could construct |stackImg| so that LUL reads directly from the stack in
|
|
|
|
// question, rather than from a copy of it. That would reduce overhead and
|
|
|
|
// space use a bit. However, it gives a problem with dynamic analysis tools
|
|
|
|
// (ASan, TSan, Valgrind) which is that such tools will report invalid or
|
|
|
|
// racing memory accesses, and such accesses will be reported deep inside LUL.
|
|
|
|
// By taking a copy here, we can either sanitise the copy (for Valgrind) or
|
|
|
|
// copy it using an unchecked memcpy (for ASan, TSan). That way we don't have
|
|
|
|
// to try and suppress errors inside LUL.
|
|
|
|
//
|
|
|
|
// N_STACK_BYTES is set to 160KB. This is big enough to hold all stacks
|
|
|
|
// observed in some minutes of testing, whilst keeping the size of this
|
|
|
|
// function (DoNativeBacktrace)'s frame reasonable. Most stacks observed in
|
|
|
|
// practice are small, 4KB or less, and so the copy costs are insignificant
|
|
|
|
// compared to other profiler overhead.
|
|
|
|
//
|
|
|
|
// |stackImg| is allocated on this (the sampling thread's) stack. That
|
|
|
|
// implies that the frame for this function is at least N_STACK_BYTES large.
|
|
|
|
// In general it would be considered unacceptable to have such a large frame
|
|
|
|
// on a stack, but it only exists for the unwinder thread, and so is not
|
|
|
|
// expected to be a problem. Allocating it on the heap is troublesome because
|
|
|
|
// this function runs whilst the sampled thread is suspended, so any heap
|
|
|
|
// allocation risks deadlock. Allocating it as a global variable is not
|
|
|
|
// thread safe, which would be a problem if we ever allow multiple sampler
|
|
|
|
// threads. Hence allocating it on the stack seems to be the least-worst
|
|
|
|
// option.
|
|
|
|
|
2017-02-09 01:02:41 +03:00
|
|
|
lul::StackImage stackImg;
|
|
|
|
|
|
|
|
{
|
2017-02-17 16:57:03 +03:00
|
|
|
#if defined(GP_PLAT_amd64_linux)
|
2017-02-09 01:02:41 +03:00
|
|
|
uintptr_t rEDZONE_SIZE = 128;
|
|
|
|
uintptr_t start = startRegs.xsp.Value() - rEDZONE_SIZE;
|
2017-02-17 16:57:03 +03:00
|
|
|
#elif defined(GP_PLAT_arm_android)
|
2017-02-09 01:02:41 +03:00
|
|
|
uintptr_t rEDZONE_SIZE = 0;
|
|
|
|
uintptr_t start = startRegs.r13.Value() - rEDZONE_SIZE;
|
2017-02-17 16:57:03 +03:00
|
|
|
#elif defined(GP_PLAT_x86_linux) || defined(GP_PLAT_x86_android)
|
2017-02-09 01:02:41 +03:00
|
|
|
uintptr_t rEDZONE_SIZE = 0;
|
|
|
|
uintptr_t start = startRegs.xsp.Value() - rEDZONE_SIZE;
|
2017-11-13 09:23:50 +03:00
|
|
|
#elif defined(GP_PLAT_mips64_linux)
|
|
|
|
uintptr_t rEDZONE_SIZE = 0;
|
|
|
|
uintptr_t start = startRegs.sp.Value() - rEDZONE_SIZE;
|
2017-02-09 01:02:41 +03:00
|
|
|
#else
|
|
|
|
# error "Unknown plat"
|
|
|
|
#endif
|
2017-06-19 02:38:15 +03:00
|
|
|
uintptr_t end = reinterpret_cast<uintptr_t>(aThreadInfo.StackTop());
|
2017-02-27 04:34:59 +03:00
|
|
|
uintptr_t ws = sizeof(void*);
|
2017-02-09 01:02:41 +03:00
|
|
|
start &= ~(ws-1);
|
|
|
|
end &= ~(ws-1);
|
|
|
|
uintptr_t nToCopy = 0;
|
|
|
|
if (start < end) {
|
|
|
|
nToCopy = end - start;
|
|
|
|
if (nToCopy > lul::N_STACK_BYTES)
|
|
|
|
nToCopy = lul::N_STACK_BYTES;
|
|
|
|
}
|
|
|
|
MOZ_ASSERT(nToCopy <= lul::N_STACK_BYTES);
|
|
|
|
stackImg.mLen = nToCopy;
|
|
|
|
stackImg.mStartAvma = start;
|
|
|
|
if (nToCopy > 0) {
|
2017-03-29 07:49:44 +03:00
|
|
|
// If this is a vanilla memcpy(), ASAN makes the following complaint:
|
|
|
|
//
|
|
|
|
// ERROR: AddressSanitizer: stack-buffer-underflow ...
|
|
|
|
// ...
|
|
|
|
// HINT: this may be a false positive if your program uses some custom
|
|
|
|
// stack unwind mechanism or swapcontext
|
|
|
|
//
|
|
|
|
// This code is very much a custom stack unwind mechanism! So we use an
|
|
|
|
// alternative memcpy() implementation that is ignored by ASAN.
|
|
|
|
#if defined(MOZ_HAVE_ASAN_BLACKLIST)
|
|
|
|
ASAN_memcpy(&stackImg.mContents[0], (void*)start, nToCopy);
|
|
|
|
#else
|
2017-02-09 01:02:41 +03:00
|
|
|
memcpy(&stackImg.mContents[0], (void*)start, nToCopy);
|
2017-03-29 07:49:44 +03:00
|
|
|
#endif
|
2017-02-09 01:02:41 +03:00
|
|
|
(void)VALGRIND_MAKE_MEM_DEFINED(&stackImg.mContents[0], nToCopy);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2017-06-19 17:21:59 +03:00
|
|
|
size_t framePointerFramesAcquired = 0;
|
2017-07-13 02:35:14 +03:00
|
|
|
lul::LUL* lul = CorePS::Lul(aLock);
|
2017-05-02 22:36:35 +03:00
|
|
|
lul->Unwind(reinterpret_cast<uintptr_t*>(aNativeStack.mPCs),
|
|
|
|
reinterpret_cast<uintptr_t*>(aNativeStack.mSPs),
|
|
|
|
&aNativeStack.mCount, &framePointerFramesAcquired,
|
2017-06-19 17:21:59 +03:00
|
|
|
MAX_NATIVE_FRAMES, &startRegs, &stackImg);
|
2017-02-09 01:02:41 +03:00
|
|
|
|
|
|
|
// Update stats in the LUL stats object. Unfortunately this requires
|
|
|
|
// three global memory operations.
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. The following
operations now lock a mutex when they previously didn't; these are the
significant ones, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
lul->mStats.mContext += 1;
|
2017-06-19 17:21:59 +03:00
|
|
|
lul->mStats.mCFI += aNativeStack.mCount - 1 - framePointerFramesAcquired;
|
2017-04-25 09:14:23 +03:00
|
|
|
lul->mStats.mFP += framePointerFramesAcquired;
|
2017-02-09 01:02:41 +03:00
|
|
|
}
|
2017-03-29 07:49:44 +03:00
|
|
|
|
2017-02-09 01:02:41 +03:00
|
|
|
#endif
|
|
|
|
|
2017-08-04 21:08:28 +03:00
|
|
|
#ifdef HAVE_NATIVE_UNWIND
|
|
|
|
static void
|
|
|
|
DoNativeBacktrace(PSLockRef aLock, const ThreadInfo& aThreadInfo,
|
|
|
|
const Registers& aRegs, NativeStack& aNativeStack)
|
|
|
|
{
|
|
|
|
// This method determines which stackwalker is used for periodic and
|
|
|
|
// synchronous samples. (Backtrace samples are treated differently, see
|
|
|
|
// profiler_suspend_and_sample_thread() for details). The only part of the
|
|
|
|
// ordering that matters is that LUL must precede FRAME_POINTER, because on
|
|
|
|
// Linux they can both be present.
|
|
|
|
#if defined(USE_LUL_STACKWALK)
|
|
|
|
DoLULBacktrace(aLock, aThreadInfo, aRegs, aNativeStack);
|
|
|
|
#elif defined(USE_EHABI_STACKWALK)
|
|
|
|
DoEHABIBacktrace(aLock, aThreadInfo, aRegs, aNativeStack);
|
|
|
|
#elif defined(USE_FRAME_POINTER_STACK_WALK)
|
|
|
|
DoFramePointerBacktrace(aLock, aThreadInfo, aRegs, aNativeStack);
|
|
|
|
#elif defined(USE_MOZ_STACK_WALK)
|
|
|
|
DoMozStackWalkBacktrace(aLock, aThreadInfo, aRegs, aNativeStack);
|
|
|
|
#else
|
|
|
|
#error "Invalid configuration"
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2017-06-19 02:38:15 +03:00
|
|
|
// Writes some components shared by periodic and synchronous profiles to
|
|
|
|
// ActivePS's ProfileBuffer. (This should only be called from DoSyncSample()
|
|
|
|
// and DoPeriodicSample().)
|
2017-07-05 14:29:29 +03:00
|
|
|
//
|
|
|
|
// The grammar for entry sequences is in a comment above
|
|
|
|
// ProfileBuffer::StreamSamplesToJSON.
|
2017-06-19 02:38:15 +03:00
|
|
|
static inline void
|
|
|
|
DoSharedSample(PSLockRef aLock, bool aIsSynchronous,
|
|
|
|
ThreadInfo& aThreadInfo, const TimeStamp& aNow,
|
|
|
|
const Registers& aRegs, ProfileBuffer::LastSample* aLS,
|
2017-07-13 04:05:34 +03:00
|
|
|
ProfileBuffer& aBuffer)
|
2017-05-19 06:27:38 +03:00
|
|
|
{
|
2017-06-19 02:09:46 +03:00
|
|
|
// WARNING: this function runs within the profiler's "critical section".
|
|
|
|
|
2017-05-02 22:36:35 +03:00
|
|
|
MOZ_RELEASE_ASSERT(ActivePS::Exists(aLock));
|
|
|
|
|
2017-07-13 04:05:34 +03:00
|
|
|
aBuffer.AddThreadIdEntry(aThreadInfo.ThreadId(), aLS);
|
2017-06-16 07:22:57 +03:00
|
|
|
|
2017-06-19 02:38:15 +03:00
|
|
|
TimeDuration delta = aNow - CorePS::ProcessStartTime();
|
2017-07-13 04:05:34 +03:00
|
|
|
aBuffer.AddEntry(ProfileBufferEntry::Time(delta.ToMilliseconds()));
|
2017-05-18 06:07:02 +03:00
|
|
|
|
2017-08-03 18:25:17 +03:00
|
|
|
ProfileBufferCollector collector(aBuffer, ActivePS::Features(aLock));
|
2017-05-18 06:07:02 +03:00
|
|
|
NativeStack nativeStack;
|
2017-03-27 09:04:56 +03:00
|
|
|
#if defined(HAVE_NATIVE_UNWIND)
|
2017-04-21 06:28:23 +03:00
|
|
|
if (ActivePS::FeatureStackWalk(aLock)) {
|
2017-06-19 02:38:15 +03:00
|
|
|
DoNativeBacktrace(aLock, aThreadInfo, aRegs, nativeStack);
|
2017-05-02 22:36:35 +03:00
|
|
|
|
2017-07-25 09:47:14 +03:00
|
|
|
MergeStacks(ActivePS::Features(aLock), aIsSynchronous, aThreadInfo, aRegs,
|
2017-08-03 18:25:17 +03:00
|
|
|
nativeStack, collector);
|
2017-03-27 09:04:56 +03:00
|
|
|
} else
|
|
|
|
#endif
|
|
|
|
{
|
2017-07-25 09:47:14 +03:00
|
|
|
MergeStacks(ActivePS::Features(aLock), aIsSynchronous, aThreadInfo, aRegs,
|
2017-08-03 18:25:17 +03:00
|
|
|
nativeStack, collector);
|
2017-05-18 06:07:02 +03:00
|
|
|
|
2017-07-22 05:04:10 +03:00
|
|
|
// We can't walk the whole native stack, but we can record the top frame.
|
2017-05-18 06:07:02 +03:00
|
|
|
if (ActivePS::FeatureLeaf(aLock)) {
|
2017-07-13 04:05:34 +03:00
|
|
|
aBuffer.AddEntry(ProfileBufferEntry::NativeLeafAddr((void*)aRegs.mPC));
|
2017-05-18 06:07:02 +03:00
|
|
|
}
|
2017-02-09 01:02:41 +03:00
|
|
|
}
|
2017-06-19 02:38:15 +03:00
|
|
|
}
|
2017-06-16 01:29:19 +03:00
|
|
|
|
2017-06-19 02:38:15 +03:00
|
|
|
// Writes the components of a synchronous sample to the given ProfileBuffer.
|
|
|
|
static void
|
|
|
|
DoSyncSample(PSLockRef aLock, ThreadInfo& aThreadInfo, const TimeStamp& aNow,
|
2017-07-13 04:05:34 +03:00
|
|
|
const Registers& aRegs, ProfileBuffer& aBuffer)
|
2017-06-19 02:38:15 +03:00
|
|
|
{
|
|
|
|
// WARNING: this function runs within the profiler's "critical section".
|
|
|
|
|
|
|
|
DoSharedSample(aLock, /* isSynchronous = */ true, aThreadInfo, aNow, aRegs,
|
|
|
|
/* lastSample = */ nullptr, aBuffer);
|
|
|
|
}
|
|
|
|
|
|
|
|
// Writes the components of a periodic sample to ActivePS's ProfileBuffer.
|
|
|
|
static void
|
|
|
|
DoPeriodicSample(PSLockRef aLock, ThreadInfo& aThreadInfo,
|
|
|
|
const TimeStamp& aNow, const Registers& aRegs,
|
|
|
|
int64_t aRSSMemory, int64_t aUSSMemory)
|
|
|
|
{
|
|
|
|
// WARNING: this function runs within the profiler's "critical section".
|
|
|
|
|
2017-07-13 04:05:34 +03:00
|
|
|
ProfileBuffer& buffer = ActivePS::Buffer(aLock);
|
2017-06-19 02:38:15 +03:00
|
|
|
|
|
|
|
DoSharedSample(aLock, /* isSynchronous = */ false, aThreadInfo, aNow, aRegs,
|
|
|
|
&aThreadInfo.LastSample(), buffer);
|
|
|
|
|
|
|
|
ProfilerMarkerLinkedList* pendingMarkersList =
|
|
|
|
aThreadInfo.RacyInfo()->GetPendingMarkers();
|
|
|
|
while (pendingMarkersList && pendingMarkersList->peek()) {
|
|
|
|
ProfilerMarker* marker = pendingMarkersList->popHead();
|
2017-07-13 04:05:34 +03:00
|
|
|
buffer.AddStoredMarker(marker);
|
|
|
|
buffer.AddEntry(ProfileBufferEntry::Marker(marker));
|
2017-02-09 01:02:41 +03:00
|
|
|
}
|
|
|
|
|
2017-06-19 02:38:15 +03:00
|
|
|
ThreadResponsiveness* resp = aThreadInfo.GetThreadResponsiveness();
|
|
|
|
if (resp && resp->HasData()) {
|
2017-08-01 22:32:18 +03:00
|
|
|
double delta = resp->GetUnresponsiveDuration(
|
|
|
|
(aNow - CorePS::ProcessStartTime()).ToMilliseconds());
|
|
|
|
buffer.AddEntry(ProfileBufferEntry::Responsiveness(delta));
|
2017-02-09 01:02:41 +03:00
|
|
|
}
|
|
|
|
|
2017-06-19 02:38:15 +03:00
|
|
|
if (aRSSMemory != 0) {
|
|
|
|
double rssMemory = static_cast<double>(aRSSMemory);
|
2017-07-13 04:05:34 +03:00
|
|
|
buffer.AddEntry(ProfileBufferEntry::ResidentMemory(rssMemory));
|
2017-02-09 01:02:41 +03:00
|
|
|
}
|
|
|
|
|
2017-06-19 02:38:15 +03:00
|
|
|
if (aUSSMemory != 0) {
|
|
|
|
double ussMemory = static_cast<double>(aUSSMemory);
|
2017-07-13 04:05:34 +03:00
|
|
|
buffer.AddEntry(ProfileBufferEntry::UnsharedMemory(ussMemory));
|
2017-02-09 01:02:41 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2017-06-19 02:38:15 +03:00
|
|
|
// END sampling/unwinding code
|
2017-02-09 01:02:41 +03:00
|
|
|
////////////////////////////////////////////////////////////////////////
|
|
|
|
|
2017-02-09 09:04:51 +03:00
|
|
|
////////////////////////////////////////////////////////////////////////
|
|
|
|
// BEGIN saving/streaming code
|
|
|
|
|
2017-03-14 00:08:38 +03:00
|
|
|
const static uint64_t kJS_MAX_SAFE_UINTEGER = +9007199254740991ULL;
|
|
|
|
|
|
|
|
static int64_t
|
|
|
|
SafeJSInteger(uint64_t aValue) {
|
|
|
|
return aValue <= kJS_MAX_SAFE_UINTEGER ? int64_t(aValue) : -1;
|
|
|
|
}
|
|
|
|
|
2017-02-09 09:04:51 +03:00
|
|
|
static void
|
2017-03-14 00:08:38 +03:00
|
|
|
AddSharedLibraryInfoToStream(JSONWriter& aWriter, const SharedLibrary& aLib)
|
2017-02-09 09:04:51 +03:00
|
|
|
{
|
2017-03-14 00:08:38 +03:00
|
|
|
aWriter.StartObjectElement();
|
|
|
|
aWriter.IntProperty("start", SafeJSInteger(aLib.GetStart()));
|
|
|
|
aWriter.IntProperty("end", SafeJSInteger(aLib.GetEnd()));
|
|
|
|
aWriter.IntProperty("offset", SafeJSInteger(aLib.GetOffset()));
|
2017-03-15 01:59:20 +03:00
|
|
|
aWriter.StringProperty("name", NS_ConvertUTF16toUTF8(aLib.GetModuleName()).get());
|
|
|
|
aWriter.StringProperty("path", NS_ConvertUTF16toUTF8(aLib.GetModulePath()).get());
|
|
|
|
aWriter.StringProperty("debugName", NS_ConvertUTF16toUTF8(aLib.GetDebugName()).get());
|
|
|
|
aWriter.StringProperty("debugPath", NS_ConvertUTF16toUTF8(aLib.GetDebugPath()).get());
|
2017-03-14 00:08:38 +03:00
|
|
|
aWriter.StringProperty("breakpadId", aLib.GetBreakpadId().c_str());
|
2017-03-15 01:59:20 +03:00
|
|
|
aWriter.StringProperty("arch", aLib.GetArch().c_str());
|
2017-03-14 00:08:38 +03:00
|
|
|
aWriter.EndObject();
|
2017-02-09 09:04:51 +03:00
|
|
|
}
|
|
|
|
|
2017-03-15 01:59:20 +03:00
|
|
|
void
|
|
|
|
AppendSharedLibraries(JSONWriter& aWriter)
|
2017-02-09 09:04:51 +03:00
|
|
|
{
|
|
|
|
SharedLibraryInfo info = SharedLibraryInfo::GetInfoForSelf();
|
2017-03-15 01:59:20 +03:00
|
|
|
info.SortByAddress();
|
2017-03-14 00:08:38 +03:00
|
|
|
for (size_t i = 0; i < info.GetSize(); i++) {
|
2017-03-15 01:59:20 +03:00
|
|
|
AddSharedLibraryInfoToStream(aWriter, info.GetEntry(i));
|
2017-02-09 09:04:51 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2017-04-03 03:40:23 +03:00
|
|
|
#ifdef MOZ_TASK_TRACER
|
|
|
|
static void
|
2017-04-15 07:22:07 +03:00
|
|
|
StreamNameAndThreadId(JSONWriter& aWriter, const char* aName, int aThreadId)
|
2017-04-03 03:40:23 +03:00
|
|
|
{
|
|
|
|
aWriter.StartObjectElement();
|
|
|
|
{
|
|
|
|
if (XRE_GetProcessType() == GeckoProcessType_Plugin) {
|
|
|
|
// TODO Add the proper plugin name
|
|
|
|
aWriter.StringProperty("name", "Plugin");
|
|
|
|
} else {
|
|
|
|
aWriter.StringProperty("name", aName);
|
|
|
|
}
|
|
|
|
aWriter.IntProperty("tid", aThreadId);
|
|
|
|
}
|
|
|
|
aWriter.EndObject();
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2017-02-09 09:04:51 +03:00
|
|
|
static void
|
2017-04-21 06:27:53 +03:00
|
|
|
StreamTaskTracer(PSLockRef aLock, SpliceableJSONWriter& aWriter)
|
2017-02-09 09:04:51 +03:00
|
|
|
{
|
|
|
|
#ifdef MOZ_TASK_TRACER
|
2017-04-21 06:28:23 +03:00
|
|
|
MOZ_RELEASE_ASSERT(CorePS::Exists() && ActivePS::Exists(aLock));
|
|
|
|
|
2017-02-09 09:04:51 +03:00
|
|
|
aWriter.StartArrayProperty("data");
|
|
|
|
{
|
|
|
|
UniquePtr<nsTArray<nsCString>> data =
|
2017-06-02 02:41:48 +03:00
|
|
|
tasktracer::GetLoggedData(CorePS::ProcessStartTime());
|
2017-02-09 09:04:51 +03:00
|
|
|
for (uint32_t i = 0; i < data->Length(); ++i) {
|
|
|
|
aWriter.StringElement((data->ElementAt(i)).get());
|
|
|
|
}
|
|
|
|
}
|
|
|
|
aWriter.EndArray();
|
|
|
|
|
|
|
|
aWriter.StartArrayProperty("threads");
|
|
|
|
{
|
2017-04-21 06:28:23 +03:00
|
|
|
const CorePS::ThreadVector& liveThreads = CorePS::LiveThreads(aLock);
|
2017-04-03 03:40:23 +03:00
|
|
|
for (size_t i = 0; i < liveThreads.size(); i++) {
|
|
|
|
ThreadInfo* info = liveThreads.at(i);
|
2017-04-15 07:22:07 +03:00
|
|
|
StreamNameAndThreadId(aWriter, info->Name(), info->ThreadId());
|
2017-04-03 03:40:23 +03:00
|
|
|
}
|
|
|
|
|
2017-04-21 06:28:23 +03:00
|
|
|
const CorePS::ThreadVector& deadThreads = CorePS::DeadThreads(aLock);
|
2017-04-03 03:40:23 +03:00
|
|
|
for (size_t i = 0; i < deadThreads.size(); i++) {
|
|
|
|
ThreadInfo* info = deadThreads.at(i);
|
2017-04-15 07:22:07 +03:00
|
|
|
StreamNameAndThreadId(aWriter, info->Name(), info->ThreadId());
|
2017-02-09 09:04:51 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
aWriter.EndArray();
|
|
|
|
|
|
|
|
aWriter.DoubleProperty(
|
2017-06-02 02:41:48 +03:00
|
|
|
"start", static_cast<double>(tasktracer::GetStartTime()));
|
2017-02-09 09:04:51 +03:00
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
StreamMetaJSCustomObject(PSLockRef aLock, SpliceableJSONWriter& aWriter,
                         const TimeStamp& aShutdownTime)
{
  MOZ_RELEASE_ASSERT(CorePS::Exists() && ActivePS::Exists(aLock));

  aWriter.IntProperty("version", 9);

  // The "startTime" field holds the number of milliseconds since midnight
  // January 1, 1970 GMT. This grotty code computes (Now - (Now -
  // ProcessStartTime)) to convert CorePS::ProcessStartTime() into that form.
  TimeDuration delta = TimeStamp::Now() - CorePS::ProcessStartTime();
  aWriter.DoubleProperty(
    "startTime", static_cast<double>(PR_Now()/1000.0 - delta.ToMilliseconds()));

  // Write the shutdownTime field. Unlike startTime, shutdownTime is not an
  // absolute time stamp: it's relative to startTime. This is consistent with
  // all other (non-"startTime") times anywhere in the profile JSON.
  if (aShutdownTime) {
    aWriter.DoubleProperty("shutdownTime", profiler_time());
  } else {
    aWriter.NullProperty("shutdownTime");
  }

  if (!NS_IsMainThread()) {
    // Leave the rest of the properties out if we're not on the main thread.
    // At the moment, the only case in which this function is called on a
    // background thread is if we're in a content process and are going to
    // send this profile to the parent process. In that case, the parent
    // process profile's "meta" object already has the rest of the properties,
    // and the parent process profile is dumped on that process's main thread.
    return;
  }

  aWriter.DoubleProperty("interval", ActivePS::Interval(aLock));
  aWriter.IntProperty("stackwalk", ActivePS::FeatureStackWalk(aLock));

#ifdef DEBUG
  aWriter.IntProperty("debug", 1);
#else
  aWriter.IntProperty("debug", 0);
#endif

  aWriter.IntProperty("gcpoison", JS::IsGCPoisoning() ? 1 : 0);

  bool asyncStacks = Preferences::GetBool("javascript.options.asyncstack");
  aWriter.IntProperty("asyncstack", asyncStacks);

  aWriter.IntProperty("processType", XRE_GetProcessType());

  nsresult res;
  nsCOMPtr<nsIHttpProtocolHandler> http =
    do_GetService(NS_NETWORK_PROTOCOL_CONTRACTID_PREFIX "http", &res);

  if (!NS_FAILED(res)) {
    nsAutoCString string;

    res = http->GetPlatform(string);
    if (!NS_FAILED(res)) {
      aWriter.StringProperty("platform", string.Data());
    }

    res = http->GetOscpu(string);
    if (!NS_FAILED(res)) {
      aWriter.StringProperty("oscpu", string.Data());
    }

    res = http->GetMisc(string);
    if (!NS_FAILED(res)) {
      aWriter.StringProperty("misc", string.Data());
    }
  }

  nsCOMPtr<nsIXULRuntime> runtime = do_GetService("@mozilla.org/xre/runtime;1");
  if (runtime) {
    nsAutoCString string;

    res = runtime->GetXPCOMABI(string);
    if (!NS_FAILED(res))
      aWriter.StringProperty("abi", string.Data());

    res = runtime->GetWidgetToolkit(string);
    if (!NS_FAILED(res))
      aWriter.StringProperty("toolkit", string.Data());
  }

  nsCOMPtr<nsIXULAppInfo> appInfo =
    do_GetService("@mozilla.org/xre/app-info;1");

  if (appInfo) {
    nsAutoCString string;
    res = appInfo->GetName(string);
    if (!NS_FAILED(res))
      aWriter.StringProperty("product", string.Data());
  }
}

#if defined(GP_OS_android)
static void
BuildJavaThreadJSObject(SpliceableJSONWriter& aWriter)
{
  aWriter.StringProperty("name", "Java Main Thread");

  aWriter.StartArrayProperty("samples");
  {
    for (int sampleId = 0; true; sampleId++) {
      bool firstRun = true;
      for (int frameId = 0; true; frameId++) {
        jni::String::LocalRef frameName =
          java::GeckoJavaSampler::GetFrameName(0, sampleId, frameId);

        // When we run out of frames, we stop looping.
        if (!frameName) {
          // If we found at least one frame, we have objects to close.
          if (!firstRun) {
            aWriter.EndArray();
            aWriter.EndObject();
          }
          break;
        }
        // The first time around, open the sample object and frames array.
        if (firstRun) {
          firstRun = false;

          double sampleTime =
            java::GeckoJavaSampler::GetSampleTime(0, sampleId);

          aWriter.StartObjectElement();
          aWriter.DoubleProperty("time", sampleTime);

          aWriter.StartArrayProperty("frames");
        }

        // Add a frame to the sample.
        aWriter.StartObjectElement();
        {
          aWriter.StringProperty("location",
                                 frameName->ToCString().BeginReading());
        }
        aWriter.EndObject();
      }

      // If we found no frames for this sample, we are done.
      if (firstRun) {
        break;
      }
    }
  }
  aWriter.EndArray();
}
#endif

static TimeStamp
locked_profiler_stream_json_for_this_process(PSLockRef aLock,
                                             SpliceableJSONWriter& aWriter,
                                             double aSinceTime,
                                             bool aIsShuttingDown)
{
  LOG("locked_profiler_stream_json_for_this_process");

  MOZ_RELEASE_ASSERT(CorePS::Exists() && ActivePS::Exists(aLock));

  double collectionStart = profiler_time();

  ProfileBuffer& buffer = ActivePS::Buffer(aLock);

  // Put shared library info
  aWriter.StartArrayProperty("libs");
  AppendSharedLibraries(aWriter);
  aWriter.EndArray();

  // Put meta data
  aWriter.StartObjectProperty("meta");
  {
    StreamMetaJSCustomObject(aLock, aWriter,
                             aIsShuttingDown ? TimeStamp::Now() : TimeStamp());
  }
  aWriter.EndObject();

  // Data of TaskTracer doesn't belong in the circular buffer.
  if (ActivePS::FeatureTaskTracer(aLock)) {
    aWriter.StartObjectProperty("tasktracer");
    StreamTaskTracer(aLock, aWriter);
    aWriter.EndObject();
  }

  double firstSampleTime = INFINITY;

  // Lists the samples for each thread profile
  aWriter.StartArrayProperty("threads");
  {
    const CorePS::ThreadVector& liveThreads = CorePS::LiveThreads(aLock);
    for (size_t i = 0; i < liveThreads.size(); i++) {
      ThreadInfo* info = liveThreads.at(i);
      if (!info->IsBeingProfiled()) {
        continue;
      }
      double thisThreadFirstSampleTime =
        info->StreamJSON(buffer, aWriter,
                         CorePS::ProcessStartTime(), aSinceTime);
      firstSampleTime = std::min(thisThreadFirstSampleTime, firstSampleTime);
    }

    const CorePS::ThreadVector& deadThreads = CorePS::DeadThreads(aLock);
    for (size_t i = 0; i < deadThreads.size(); i++) {
      ThreadInfo* info = deadThreads.at(i);
      MOZ_ASSERT(info->IsBeingProfiled());
      double thisThreadFirstSampleTime =
        info->StreamJSON(buffer, aWriter,
                         CorePS::ProcessStartTime(), aSinceTime);
      firstSampleTime = std::min(thisThreadFirstSampleTime, firstSampleTime);
    }

#if defined(GP_OS_android)
    if (ActivePS::FeatureJava(aLock)) {
      java::GeckoJavaSampler::Pause();

      aWriter.Start();
      {
        BuildJavaThreadJSObject(aWriter);
      }
      aWriter.End();

      java::GeckoJavaSampler::Unpause();
    }
#endif
  }
  aWriter.EndArray();

  aWriter.StartArrayProperty("pausedRanges");
  {
    buffer.StreamPausedRangesToJSON(aWriter, aSinceTime);
  }
  aWriter.EndArray();

  double collectionEnd = profiler_time();

  // Record timestamps for the collection into the buffer, so that consumers
  // know why we didn't collect any samples for its duration.
  // We put these entries into the buffer after we've collected the profile,
  // so they'll be visible for the *next* profile collection (if they haven't
  // been overwritten due to buffer wraparound by then).
  buffer.AddEntry(ProfileBufferEntry::CollectionStart(collectionStart));
  buffer.AddEntry(ProfileBufferEntry::CollectionEnd(collectionEnd));

  if (firstSampleTime != INFINITY) {
    return CorePS::ProcessStartTime() +
           TimeDuration::FromMilliseconds(firstSampleTime);
  }

  return TimeStamp();
}

bool
profiler_stream_json_for_this_process(SpliceableJSONWriter& aWriter,
                                      double aSinceTime,
                                      bool aIsShuttingDown,
                                      TimeStamp* aOutFirstSampleTime)
{
  LOG("profiler_stream_json_for_this_process");

  MOZ_RELEASE_ASSERT(CorePS::Exists());

  PSAutoLock lock(gPSMutex);

  if (!ActivePS::Exists(lock)) {
    return false;
  }

  TimeStamp firstSampleTime =
    locked_profiler_stream_json_for_this_process(lock, aWriter, aSinceTime,
                                                 aIsShuttingDown);

  if (aOutFirstSampleTime) {
    *aOutFirstSampleTime = firstSampleTime;
  }

  return true;
}

// END saving/streaming code
////////////////////////////////////////////////////////////////////////

static void
PrintUsageThenExit(int aExitCode)
{
  MOZ_RELEASE_ASSERT(NS_IsMainThread());

  printf(
    "\n"
    "Profiler environment variable usage:\n"
    "\n"
    "  MOZ_PROFILER_HELP\n"
    "  If set to any value, prints this message.\n"
    "\n"
    "  MOZ_LOG\n"
    "  Enables logging. The levels of logging available are\n"
    "  'prof:3' (least verbose), 'prof:4', 'prof:5' (most verbose).\n"
    "\n"
    "  MOZ_PROFILER_STARTUP\n"
    "  If set to any value, starts the profiler immediately on start-up.\n"
    "  Useful if you want to profile code that runs very early.\n"
    "\n"
    "  MOZ_PROFILER_STARTUP_ENTRIES=<1..>\n"
    "  If MOZ_PROFILER_STARTUP is set, specifies the number of entries in\n"
    "  the profiler's circular buffer when the profiler is first started.\n"
    "  If unset, the platform default is used.\n"
    "\n"
    "  MOZ_PROFILER_STARTUP_INTERVAL=<1..1000>\n"
    "  If MOZ_PROFILER_STARTUP is set, specifies the sample interval,\n"
    "  measured in milliseconds, when the profiler is first started.\n"
    "  If unset, the platform default is used.\n"
    "\n"
    "  MOZ_PROFILER_STARTUP_FEATURES_BITFIELD=<Number>\n"
    "  If MOZ_PROFILER_STARTUP is set, specifies the profiling features, as\n"
    "  the integer value of the features bitfield.\n"
    "  If unset, the value from MOZ_PROFILER_STARTUP_FEATURES is used.\n"
    "\n"
    "  MOZ_PROFILER_STARTUP_FEATURES=<Features>\n"
    "  If MOZ_PROFILER_STARTUP is set, specifies the profiling features, as\n"
    "  a comma-separated list of strings.\n"
    "  Ignored if MOZ_PROFILER_STARTUP_FEATURES_BITFIELD is set.\n"
    "  If unset, the platform default is used.\n"
    "\n"
    "  MOZ_PROFILER_STARTUP_FILTERS=<Filters>\n"
    "  If MOZ_PROFILER_STARTUP is set, specifies the thread filters, as a\n"
    "  comma-separated list of strings. A given thread will be sampled if any\n"
    "  of the filters is a case-insensitive substring of the thread name.\n"
    "  If unset, a default is used.\n"
    "\n"
    "  MOZ_PROFILER_SHUTDOWN\n"
    "  If set, the profiler saves a profile to the named file on shutdown.\n"
    "\n"
    "  MOZ_PROFILER_LUL_TEST\n"
    "  If set to any value, runs LUL unit tests at startup.\n"
    "\n"
    "  This platform %s native unwinding.\n"
    "\n",
#if defined(HAVE_NATIVE_UNWIND)
    "supports"
#else
    "does not support"
#endif
  );

  exit(aExitCode);
}


////////////////////////////////////////////////////////////////////////
// BEGIN Sampler

#if defined(GP_OS_linux) || defined(GP_OS_android)
struct SigHandlerCoordinator;
#endif

Bug 1357829 - Part 1: Expose profiler_suspend_and_sample_thread, r=njn
This patch performs a refactoring to the internals of the profiler in order to
expose a function, profiler_suspend_and_sample_thread, which can be called from a
background thread to suspend, sample the native stack, and then resume the
target passed-in thread.
The interface was designed to expose as few internals of the profiler as
possible, exposing only a single callback which accepts the list of program
counters and stack pointers collected during the backtrace.
A method `profiler_current_thread_id` was also added to get the thread_id of the
current thread, which can then be passed by another thread into
profiler_suspend_sample_thread to sample the stack of that thread.
This is implemented in two parts:
1) Splitting SamplerThread into two classes: Sampler, and SamplerThread.
Sampler was created to extract the core logic from SamplerThread which manages
unix signals on android and linux, as well as suspends the target thread on all
platforms. SamplerThread was then modified to subclass this type, adding the
extra methods and fields required for the creation and management of the actual
Sampler Thread.
Some work was done to ensure that the methods on Sampler would not require
ActivePS to be present, as we intend to sample threads when the profiler is not
active for the Background Hang Reporter.
2) Moving the Tick() logic into the TickController interface.
A TickController interface was added to platform which has 2 methods: Tick and
Backtrace. The Tick method replaces the previous Tick() static method, allowing
it to be overridden by a different consumer of SuspendAndSampleAndResumeThread,
while the Backtrace() method replaces the previous MergeStacksIntoProfile
method, allowing it to be overridden by different consumers of
DoNativeBacktrace.
This interface object is then used to wrap implementation specific data, such as
the ProfilerBuffer, and is threaded through the SuspendAndSampleAndResumeThread
and DoNativeBacktrace methods.
This change added 2 virtual calls to the SamplerThread's critical section, which
I believe should be a small enough overhead that it will not affect profiling
performance. These virtual calls could be avoided using templating, but I
decided that doing so would be unnecessary.
MozReview-Commit-ID: AT48xb2asgV
2017-05-02 22:36:35 +03:00
|
|
|
// Sampler performs setup and teardown of the state required to sample with the
|
|
|
|
// profiler. Sampler may exist when ActivePS is not present.
|
|
|
|
//
|
|
|
|
// SuspendAndSampleAndResumeThread must only be called from a single thread,
|
|
|
|
// and must not sample the thread it is being called from. A separate Sampler
|
|
|
|
// instance must be used for each thread which wants to capture samples.
|
|
|
|
|
|
|
|
// WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING
|
|
|
|
//
|
|
|
|
// With the exception of SamplerThread, all Sampler objects must be Disable-d
|
|
|
|
// before releasing the lock which was used to create them. This avoids races
|
|
|
|
// on linux with the SIGPROF signal handler.
|
|
|
|
|
|
|
|
class Sampler
|
|
|
|
{
|
|
|
|
public:
|
|
|
|
// Sets up the profiler such that it can begin sampling.
|
|
|
|
explicit Sampler(PSLockRef aLock);
|
|
|
|
|
|
|
|
// Disable the sampler, restoring it to its previous state. This must be
|
|
|
|
// called once, and only once, before the Sampler is destroyed.
|
|
|
|
void Disable(PSLockRef aLock);
|
|
|
|
|
2017-05-18 06:07:02 +03:00
|
|
|
// This method suspends and resumes the samplee thread. It calls the passed-in
|
2017-06-19 02:38:15 +03:00
|
|
|
// function-like object aProcessRegs (passing it a populated |const
|
|
|
|
// Registers&| arg) while the samplee thread is suspended.
|
2017-05-18 06:07:02 +03:00
|
|
|
//
|
|
|
|
// Func must be a function-like object of type `void()`.
|
|
|
|
template<typename Func>
|
Bug 1357829 - Part 1: Expose profiler_suspend_and_sample_thread, r=njn
This patch performs a refactoring to the internals of the profiler in order to
expose a function, profiler_suspend_and_sample_thread, which can be called from a
background thread to suspend, sample the native stack, and then resume the
target passed-in thread.
The interface was designed to expose as few internals of the profiler as
possible, exposing only a single callback which accepts the list of program
counters and stack pointers collected during the backtrace.
A method `profiler_current_thread_id` was also added to get the thread_id of the
current thread, which can then be passed by another thread into
profiler_suspend_sample_thread to sample the stack of that thread.
This is implemented in two parts:
1) Splitting SamplerThread into two classes: Sampler, and SamplerThread.
Sampler was created to extract the core logic from SamplerThread which manages
unix signals on android and linux, as well as suspends the target thread on all
platforms. SamplerThread was then modified to subclass this type, adding the
extra methods and fields required for the creation and management of the actual
Sampler Thread.
Some work was done to ensure that the methods on Sampler would not require
ActivePS to be present, as we intend to sample threads when the profiler is not
active for the Background Hang Reporter.
2) Moving the Tick() logic into the TickController interface.
A TickController interface was added to platform which has 2 methods: Tick and
Backtrace. The Tick method replaces the previous Tick() static method, allowing
it to be overridden by a different consumer of SuspendAndSampleAndResumeThread,
while the Backtrace() method replaces the previous MergeStacksIntoProfile
method, allowing it to be overridden by different consumers of
DoNativeBacktrace.
This interface object is then used to wrap implementation specific data, such as
the ProfilerBuffer, and is threaded through the SuspendAndSampleAndResumeThread
and DoNativeBacktrace methods.
This change added 2 virtual calls to the SamplerThread's critical section, which
I believe should be a small enough overhead that it will not affect profiling
performance. These virtual calls could be avoided using templating, but I
decided that doing so would be unnecessary.
MozReview-Commit-ID: AT48xb2asgV
2017-05-02 22:36:35 +03:00
|
|
|
void SuspendAndSampleAndResumeThread(PSLockRef aLock,
|
2017-06-19 02:38:15 +03:00
|
|
|
const ThreadInfo& aThreadInfo,
|
|
|
|
const Func& aProcessRegs);
|
 private:
#if defined(GP_OS_linux) || defined(GP_OS_android)
  // Used to restore the SIGPROF handler when ours is removed.
  struct sigaction mOldSigprofHandler;

  // This process' ID. Needed as an argument for tgkill in
  // SuspendAndSampleAndResumeThread.
  int mMyPid;

  // The sampler thread's ID. Used to assert that it is not sampling itself,
  // which would lead to deadlock.
  int mSamplerTid;

 public:
  // This is the one-and-only variable used to communicate between the sampler
  // thread and the samplee thread's signal handler. It's static because the
  // samplee thread's signal handler is static.
  static struct SigHandlerCoordinator* sSigHandlerCoordinator;
#endif
};

// END Sampler
////////////////////////////////////////////////////////////////////////

////////////////////////////////////////////////////////////////////////
// BEGIN SamplerThread

// The sampler thread controls sampling and runs whenever the profiler is
// active. It periodically runs through all registered threads, finds those
// that should be sampled, then pauses and samples them.
class SamplerThread : public Sampler
{
public:
  // Creates a sampler thread, but doesn't start it.
  SamplerThread(PSLockRef aLock, uint32_t aActivityGeneration,
                double aIntervalMilliseconds);
  ~SamplerThread();

  // This runs on (is!) the sampler thread.
  void Run();

  // This runs on the main thread.
  void Stop(PSLockRef aLock);

private:
  // This suspends the calling thread for the given number of microseconds.
  // Best effort timing.
  void SleepMicro(uint32_t aMicroseconds);

  // The activity generation, for detecting when the sampler thread must stop.
  const uint32_t mActivityGeneration;

  // The interval between samples, measured in microseconds.
  const int mIntervalMicroseconds;

  // The OS-specific handle for the sampler thread.
#if defined(GP_OS_windows)
  HANDLE mThread;
#elif defined(GP_OS_darwin) || defined(GP_OS_linux) || defined(GP_OS_android)
  pthread_t mThread;
#endif

  SamplerThread(const SamplerThread&) = delete;
  void operator=(const SamplerThread&) = delete;
};

// This function is required because we need to create a SamplerThread within
// ActivePS's constructor, but SamplerThread is defined after ActivePS. It
// could probably be removed by moving some code around.
static SamplerThread*
NewSamplerThread(PSLockRef aLock, uint32_t aGeneration, double aInterval)
{
  return new SamplerThread(aLock, aGeneration, aInterval);
}
// This function is the sampler thread. This implementation is used for all
|
|
|
|
// targets.
|
|
|
|
void
|
|
|
|
SamplerThread::Run()
|
|
|
|
{
|
2017-05-02 05:56:47 +03:00
|
|
|
PR_SetCurrentThreadName("SamplerThread");
|
|
|
|
|
Bug 1344169 - Factor out the common parts of SamplerThread::Run(). r=n.nethercote.
All three platform-*.cpp files have similar structure, most especially for
SamplerThread::Run(), with considerable duplication. This patch factors out
the common parts into a single implementation in platform.cpp.
* The top level structure of class SamplerThread has been moved to
platform.cpp.
* The class has some target-dependent fields, relating to signal handling and
thread identity.
* There's a single implementation of Run() in platform.cpp.
* AllocPlatformData() and PlatformDataDestructor::operator() have also been
commoned up and moved into platform.cpp.
* Time units in SamplerThread have been tidied up. We now use microseconds
throughout, except in the constructor. All time interval field and variable
names incorporate the unit (microseconds/milliseconds) for clarity. The
Windows uses of such values are scaled up/down by 1000 accordingly.
* The pre-existing MacOS Run() implementation contained logic that attempted
to keep "to schedule" in the presence of inaccuracy in the actual sleep
intervals. This now applies to all targets. A couple of comments on this
code have been added.
* platform-{win32,macos,linux-android}.cpp have had their Run() methods
removed, and all other methods placed in the same sequences, to the extent
that is possible.
* In the Win32 and MacOS implementations, Thread::SampleContext has been
renamed to Thread::SuspendSampleAndResumeThread as that better describes
what it does. In the Linux/Android implementation there was no such
separate method, so one has been created.
* The three Thread::SuspendSampleAndResumeThread methods have been commented
in such a way as to emphasise their identical top level structure.
* The point in platform.cpp where platform-{win32,macos,linux-android}.cpp are
#included has been moved slightly earlier in the file, into the
SamplerThread encampment, as that seems like a better place for it.
--HG--
extra : rebase_source : 0f93e15967b810c09e645fa593dbf85f94b53a9b
2017-03-10 18:10:14 +03:00
|
|
|
// This will be positive if we are running behind schedule (sampling less
|
|
|
|
// frequently than desired) and negative if we are ahead of schedule.
|
|
|
|
TimeDuration lastSleepOvershoot = 0;
|
|
|
|
TimeStamp sampleStart = TimeStamp::Now();
|
|
|
|
|
|
|
|
while (true) {
|
|
|
|
// This scope is for |lock|. It ends before we sleep below.
|
|
|
|
{
|
2017-04-21 06:27:53 +03:00
|
|
|
PSAutoLock lock(gPSMutex);
|
Bug 1344169 - Factor out the common parts of SamplerThread::Run(). r=n.nethercote.
All three platform-*.cpp files have similar structure, most especially for
SamplerThread::Run(), with considerable duplication. This patch factors out
the common parts into a single implementation in platform.cpp.
* The top level structure of class SamplerThread has been moved to
platform.cpp.
* The class has some target-dependent fields, relating to signal handling and
thread identity.
* There's a single implementation of Run() in platform.cpp.
* AllocPlatformData() and PlatformDataDestructor::operator() have also been
commoned up and moved into platform.cpp.
* Time units in SamplerThread have been tidied up. We now use microseconds
throughout, except in the constructor. All time interval field and variable
names incorporate the unit (microseconds/milliseconds) for clarity. The
Windows uses of such values are scaled up/down by 1000 accordingly.
* The pre-existing MacOS Run() implementation contained logic that attempted
to keep "to schedule" in the presence of inaccuracy in the actual sleep
intervals. This now applies to all targets. A couple of comments on this
code have been added.
* platform-{win32,macos,linux-android}.cpp have had their Run() methods
removed, and all other methods placed in the same sequences, to the extent
that is possible.
* In the Win32 and MacOS implementations, Thread::SampleContext has been
renamed to Thread::SuspendSampleAndResumeThread as that better describes
what it does. In the Linux/Android implementation there was no such
separate method, so one has been created.
* The three Thread::SuspendSampleAndResumeThread methods have been commented
in such a way as to emphasise their identical top level structure.
* The point in platform.cpp where platform-{win32,macos,linux-android}.cpp are
#included has been moved slightly earlier in the file, into the
SamplerThread encampment, as that seems like a better place for it.
--HG--
extra : rebase_source : 0f93e15967b810c09e645fa593dbf85f94b53a9b
2017-03-10 18:10:14 +03:00
|
|
|
|
2017-04-21 06:28:23 +03:00
|
|
|
if (!ActivePS::Exists(lock)) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
Bug 1344169 - Factor out the common parts of SamplerThread::Run(). r=n.nethercote.
All three platform-*.cpp files have similar structure, most especially for
SamplerThread::Run(), with considerable duplication. This patch factors out
the common parts into a single implementation in platform.cpp.
* The top level structure of class SamplerThread has been moved to
platform.cpp.
* The class has some target-dependent fields, relating to signal handling and
thread identity.
* There's a single implementation of Run() in platform.cpp.
* AllocPlatformData() and PlatformDataDestructor::operator() have also been
commoned up and moved into platform.cpp.
* Time units in SamplerThread have been tidied up. We now use microseconds
throughout, except in the constructor. All time interval field and variable
names incorporate the unit (microseconds/milliseconds) for clarity. The
Windows uses of such values are scaled up/down by 1000 accordingly.
* The pre-existing MacOS Run() implementation contained logic that attempted
to keep "to schedule" in the presence of inaccuracy in the actual sleep
intervals. This now applies to all targets. A couple of comments on this
code have been added.
* platform-{win32,macos,linux-android}.cpp have had their Run() methods
removed, and all other methods placed in the same sequences, to the extent
that is possible.
* In the Win32 and MacOS implementations, Thread::SampleContext has been
renamed to Thread::SuspendSampleAndResumeThread as that better describes
what it does. In the Linux/Android implementation there was no such
separate method, so one has been created.
* The three Thread::SuspendSampleAndResumeThread methods have been commented
in such a way as to emphasise their identical top level structure.
* The point in platform.cpp where platform-{win32,macos,linux-android}.cpp are
#included has been moved slightly earlier in the file, into the
SamplerThread encampment, as that seems like a better place for it.
--HG--
extra : rebase_source : 0f93e15967b810c09e645fa593dbf85f94b53a9b
2017-03-10 18:10:14 +03:00
|
|
|
// At this point profiler_stop() might have been called, and
|
2017-04-21 06:28:23 +03:00
|
|
|
// profiler_start() might have been called on another thread. If this
|
|
|
|
// happens the generation won't match.
|
|
|
|
if (ActivePS::Generation(lock) != mActivityGeneration) {
|
Bug 1344169 - Factor out the common parts of SamplerThread::Run(). r=n.nethercote.
All three platform-*.cpp files have similar structure, most especially for
SamplerThread::Run(), with considerable duplication. This patch factors out
the common parts into a single implementation in platform.cpp.
* The top level structure of class SamplerThread has been moved to
platform.cpp.
* The class has some target-dependent fields, relating to signal handling and
thread identity.
* There's a single implementation of Run() in platform.cpp.
* AllocPlatformData() and PlatformDataDestructor::operator() have also been
commoned up and moved into platform.cpp.
* Time units in SamplerThread have been tidied up. We now use microseconds
throughout, except in the constructor. All time interval field and variable
names incorporate the unit (microseconds/milliseconds) for clarity. The
Windows uses of such values are scaled up/down by 1000 accordingly.
* The pre-existing MacOS Run() implementation contained logic that attempted
to keep "to schedule" in the presence of inaccuracy in the actual sleep
intervals. This now applies to all targets. A couple of comments on this
code have been added.
* platform-{win32,macos,linux-android}.cpp have had their Run() methods
removed, and all other methods placed in the same sequences, to the extent
that is possible.
* In the Win32 and MacOS implementations, Thread::SampleContext has been
renamed to Thread::SuspendSampleAndResumeThread as that better describes
what it does. In the Linux/Android implementation there was no such
separate method, so one has been created.
* The three Thread::SuspendSampleAndResumeThread methods have been commented
in such a way as to emphasise their identical top level structure.
* The point in platform.cpp where platform-{win32,macos,linux-android}.cpp are
#included has been moved slightly earlier in the file, into the
SamplerThread encampment, as that seems like a better place for it.
--HG--
extra : rebase_source : 0f93e15967b810c09e645fa593dbf85f94b53a9b
2017-03-10 18:10:14 +03:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2017-07-13 04:05:34 +03:00
|
|
|
ActivePS::Buffer(lock).DeleteExpiredStoredMarkers();
|
Bug 1344169 - Factor out the common parts of SamplerThread::Run(). r=n.nethercote.
All three platform-*.cpp files have similar structure, most especially for
SamplerThread::Run(), with considerable duplication. This patch factors out
the common parts into a single implementation in platform.cpp.
* The top level structure of class SamplerThread has been moved to
platform.cpp.
* The class has some target-dependent fields, relating to signal handling and
thread identity.
* There's a single implementation of Run() in platform.cpp.
* AllocPlatformData() and PlatformDataDestructor::operator() have also been
commoned up and moved into platform.cpp.
* Time units in SamplerThread have been tidied up. We now use microseconds
throughout, except in the constructor. All time interval field and variable
names incorporate the unit (microseconds/milliseconds) for clarity. The
Windows uses of such values are scaled up/down by 1000 accordingly.
* The pre-existing MacOS Run() implementation contained logic that attempted
to keep "to schedule" in the presence of inaccuracy in the actual sleep
intervals. This now applies to all targets. A couple of comments on this
code have been added.
* platform-{win32,macos,linux-android}.cpp have had their Run() methods
removed, and all other methods placed in the same sequences, to the extent
that is possible.
* In the Win32 and MacOS implementations, Thread::SampleContext has been
renamed to Thread::SuspendSampleAndResumeThread as that better describes
what it does. In the Linux/Android implementation there was no such
separate method, so one has been created.
* The three Thread::SuspendSampleAndResumeThread methods have been commented
in such a way as to emphasise their identical top level structure.
* The point in platform.cpp where platform-{win32,macos,linux-android}.cpp are
#included has been moved slightly earlier in the file, into the
SamplerThread encampment, as that seems like a better place for it.
--HG--
extra : rebase_source : 0f93e15967b810c09e645fa593dbf85f94b53a9b
2017-03-10 18:10:14 +03:00
|
|
|
|
2017-04-21 06:28:23 +03:00
|
|
|
if (!ActivePS::IsPaused(lock)) {
|
|
|
|
const CorePS::ThreadVector& liveThreads = CorePS::LiveThreads(lock);
|
2017-04-03 03:40:23 +03:00
|
|
|
for (uint32_t i = 0; i < liveThreads.size(); i++) {
|
2017-04-21 06:28:23 +03:00
|
|
|
ThreadInfo* info = liveThreads.at(i);
|
Bug 1344169 - Factor out the common parts of SamplerThread::Run(). r=n.nethercote.
All three platform-*.cpp files have similar structure, most especially for
SamplerThread::Run(), with considerable duplication. This patch factors out
the common parts into a single implementation in platform.cpp.
* The top level structure of class SamplerThread has been moved to
platform.cpp.
* The class has some target-dependent fields, relating to signal handling and
thread identity.
* There's a single implementation of Run() in platform.cpp.
* AllocPlatformData() and PlatformDataDestructor::operator() have also been
commoned up and moved into platform.cpp.
* Time units in SamplerThread have been tidied up. We now use microseconds
throughout, except in the constructor. All time interval field and variable
names incorporate the unit (microseconds/milliseconds) for clarity. The
Windows uses of such values are scaled up/down by 1000 accordingly.
* The pre-existing MacOS Run() implementation contained logic that attempted
to keep "to schedule" in the presence of inaccuracy in the actual sleep
intervals. This now applies to all targets. A couple of comments on this
code have been added.
* platform-{win32,macos,linux-android}.cpp have had their Run() methods
removed, and all other methods placed in the same sequences, to the extent
that is possible.
* In the Win32 and MacOS implementations, Thread::SampleContext has been
renamed to Thread::SuspendSampleAndResumeThread as that better describes
what it does. In the Linux/Android implementation there was no such
separate method, so one has been created.
* The three Thread::SuspendSampleAndResumeThread methods have been commented
in such a way as to emphasise their identical top level structure.
* The point in platform.cpp where platform-{win32,macos,linux-android}.cpp are
#included has been moved slightly earlier in the file, into the
SamplerThread encampment, as that seems like a better place for it.
--HG--
extra : rebase_source : 0f93e15967b810c09e645fa593dbf85f94b53a9b
2017-03-10 18:10:14 +03:00

          if (!info->IsBeingProfiled()) {
            // We are not interested in profiling this thread.
            continue;
          }

          // If the thread is asleep and has been sampled before in the same
          // sleep episode, find and copy the previous sample, as that's
          // cheaper than taking a new sample.
          if (info->RacyInfo()->CanDuplicateLastSampleDueToSleep()) {
            bool dup_ok =
              ActivePS::Buffer(lock).DuplicateLastSample(
                info->ThreadId(), CorePS::ProcessStartTime(),
                info->LastSample());
            if (dup_ok) {
              continue;
            }
          }

          // We only track responsiveness for the main thread.
          if (info->IsMainThread()) {
            info->GetThreadResponsiveness()->Update();
          }

          // We only get the memory measurements once for all live threads.
          int64_t rssMemory = 0;
          int64_t ussMemory = 0;
          if (i == 0 && ActivePS::FeatureMemory(lock)) {
            rssMemory = nsMemoryReporterManager::ResidentFast();
#if defined(GP_OS_linux) || defined(GP_OS_android)
            ussMemory = nsMemoryReporterManager::ResidentUnique();
#endif
          }

          TimeStamp now = TimeStamp::Now();
          SuspendAndSampleAndResumeThread(lock, *info,
                                          [&](const Registers& aRegs) {
            DoPeriodicSample(lock, *info, now, aRegs, rssMemory, ussMemory);
          });
        }

#if defined(USE_LUL_STACKWALK)
        // The LUL unwind object accumulates frame statistics. Periodically we
        // should poke it to give it a chance to print those statistics. This
        // involves doing I/O (fprintf, __android_log_print, etc.) and so
        // can't safely be done from the critical section inside
        // SuspendAndSampleAndResumeThread, which is why it is done here.
        CorePS::Lul(lock)->MaybeShowStats();
#endif
      }
    }
    // gPSMutex is not held after this point.

    // Calculate how long a sleep to request. After the sleep, measure how
    // long we actually slept and take the difference into account when
    // calculating the sleep interval for the next iteration. This is an
    // attempt to keep "to schedule" in the presence of inaccuracy of the
    // actual sleep intervals.
    TimeStamp targetSleepEndTime =
      sampleStart + TimeDuration::FromMicroseconds(mIntervalMicroseconds);
    TimeStamp beforeSleep = TimeStamp::Now();
    TimeDuration targetSleepDuration = targetSleepEndTime - beforeSleep;
    double sleepTime = std::max(0.0, (targetSleepDuration -
                                      lastSleepOvershoot).ToMicroseconds());
    SleepMicro(static_cast<uint32_t>(sleepTime));
    sampleStart = TimeStamp::Now();
    lastSleepOvershoot =
      sampleStart - (beforeSleep + TimeDuration::FromMicroseconds(sleepTime));
  }
}

// We #include these files directly because it means those files can use
// declarations from this file trivially. These provide target-specific
// implementations of all SamplerThread methods except Run().
#if defined(GP_OS_windows)
# include "platform-win32.cpp"
#elif defined(GP_OS_darwin)
# include "platform-macos.cpp"
#elif defined(GP_OS_linux) || defined(GP_OS_android)
# include "platform-linux-android.cpp"
#else
# error "bad platform"
#endif

UniquePlatformData
AllocPlatformData(int aThreadId)
{
  return UniquePlatformData(new PlatformData(aThreadId));
}

void
PlatformDataDestructor::operator()(PlatformData* aData)
{
  delete aData;
}

// END SamplerThread
////////////////////////////////////////////////////////////////////////

////////////////////////////////////////////////////////////////////////
// BEGIN externally visible functions

Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
MOZ_DEFINE_MALLOC_SIZE_OF(GeckoProfilerMallocSizeOf)

NS_IMETHODIMP
GeckoProfilerReporter::CollectReports(nsIHandleReportCallback* aHandleReport,
                                      nsISupports* aData, bool aAnonymize)
{
  MOZ_RELEASE_ASSERT(NS_IsMainThread());

  size_t profSize = 0;
  size_t lulSize = 0;

  {
    PSAutoLock lock(gPSMutex);

    if (CorePS::Exists()) {
      CorePS::AddSizeOf(lock, GeckoProfilerMallocSizeOf, profSize, lulSize);
    }

    if (ActivePS::Exists(lock)) {
      profSize += ActivePS::SizeOf(lock, GeckoProfilerMallocSizeOf);
    }
  }

  MOZ_COLLECT_REPORT(
    "explicit/profiler/profiler-state", KIND_HEAP, UNITS_BYTES, profSize,
    "Memory used by the Gecko Profiler's global state (excluding memory used "
    "by LUL).");

#if defined(USE_LUL_STACKWALK)
  MOZ_COLLECT_REPORT(
    "explicit/profiler/lul", KIND_HEAP, UNITS_BYTES, lulSize,
    "Memory used by LUL, a stack unwinder used by the Gecko Profiler.");
#endif

  return NS_OK;
}

NS_IMPL_ISUPPORTS(GeckoProfilerReporter, nsIMemoryReporter)

static bool
HasFeature(const char** aFeatures, uint32_t aFeatureCount, const char* aFeature)
{
  for (size_t i = 0; i < aFeatureCount; i++) {
    if (strcmp(aFeatures[i], aFeature) == 0) {
      return true;
    }
  }
  return false;
}

uint32_t
ParseFeaturesFromStringArray(const char** aFeatures, uint32_t aFeatureCount)
{
#define ADD_FEATURE_BIT(n_, str_, Name_) \
  if (HasFeature(aFeatures, aFeatureCount, str_)) { \
    features |= ProfilerFeature::Name_; \
  }

  uint32_t features = 0;
  PROFILER_FOR_EACH_FEATURE(ADD_FEATURE_BIT)

#undef ADD_FEATURE_BIT

  return features;
}

// Find the ThreadInfo for the current thread. This should only be called in
// places where TLSInfo can't be used. On success, *aIndexOut is set to the
// index if it is non-null.
static ThreadInfo*
FindLiveThreadInfo(PSLockRef aLock, int* aIndexOut = nullptr)
{
  ThreadInfo* ret = nullptr;
  int id = Thread::GetCurrentId();
  const CorePS::ThreadVector& liveThreads = CorePS::LiveThreads(aLock);
  for (uint32_t i = 0; i < liveThreads.size(); i++) {
    ThreadInfo* info = liveThreads.at(i);
    if (info->ThreadId() == id) {
      if (aIndexOut) {
        *aIndexOut = i;
      }
      ret = info;
      break;
    }
  }

  return ret;
}

static void
locked_register_thread(PSLockRef aLock, const char* aName, void* aStackTop)
{
  MOZ_RELEASE_ASSERT(CorePS::Exists());

  MOZ_RELEASE_ASSERT(!FindLiveThreadInfo(aLock));

  if (!TLSInfo::Init(aLock)) {
    return;
  }

  ThreadInfo* info = new ThreadInfo(aName, Thread::GetCurrentId(),
                                    NS_IsMainThread(), aStackTop);
  TLSInfo::SetInfo(aLock, info);

  if (ActivePS::Exists(aLock) && ActivePS::ShouldProfileThread(aLock, info)) {
    info->StartProfiling();
    if (ActivePS::FeatureJS(aLock)) {
      // This StartJSSampling() call is on-thread, so we can poll manually to
      // start JS sampling immediately.
      info->StartJSSampling();
      info->PollJSSampling();
    }
Bug 1345262 (part 5) - Fix how JS sampling is started/stopped by the profiler. r=mstange,djvj.
Currently, JS sampling has major problems.
- JS sampling is enabled for all JS threads from the thread that runs
locked_profiler_start() -- currently only the main thread -- but the JS
engine can't handle enabling from off-thread, and asserts. This makes
profiling workers impossible in a debug build.
- No JS thread will be JS sampled unless enableJSSampling() is called, but that
only happens in locked_profiler_start(). That means any worker threads
created while the profiler is active won't be JS sampled.
- Only the thread that runs locked_profiler_stop() -- currently only the main
thread -- ever calls disableJSSampling(). This means that worker threads that
start being JS sampled never stop being JS sampled.
This patch fixes these three problems in the following ways.
- locked_profiler_start() now sets a flag in PseudoStack that indicates
JS sampling is desired, but doesn't directly enable it. Instead, the JS
thread polls that flag and enables JS sampling itself when it sees the flag
is set. The polling is done by the interrupt callback. There was already a
flag of this sort (mJSSampling) but the new one is better.
This required adding a call to profiler_js_operation_callback() to the
InterruptCallback() in XPCJSContext.cpp. (In comparison, the
InterruptCallback() in dom/workers/RuntimeService.cpp already had such a
call.)
- RegisterCurrentThread() now requests JS sampling of a JS thread when the
profiler is active, the thread is being profiled, and JS sampling is enabled.
- locked_profiler_stop() now calls stopJSSampling() on all live threads.
The patch makes the following smaller changes as well.
- Renames profiler_js_operation_callback() as profiler_js_interrupt_callback(),
because "interrupt callback" is the standard name (viz.
JS_AddInterruptCallback()).
- Calls js::RegisterContextProfilingEventMarker() with nullptr when stopping
JS sampling, so that ProfilerJSEventMarker won't fire unnecessarily.
- Some minor formatting changes.
--HG--
extra : rebase_source : 372f94c963a9e5b2493389892499b1ca205ebc2f
2017-03-10 01:04:23 +03:00
|
|
|
}
|
|
|
|
|
2017-04-21 06:28:23 +03:00
|
|
|
CorePS::LiveThreads(aLock).push_back(info);
|
2017-02-07 06:24:33 +03:00
|
|
|
}

static void
NotifyObservers(const char* aTopic, nsISupports* aSubject = nullptr)
{
  if (!NS_IsMainThread()) {
    // Dispatch a task to the main thread that notifies observers.
    // If NotifyObservers is called both on and off the main thread within a
    // short time, the order of the notifications can be different from the
    // order of the calls to NotifyObservers.
    // Getting the order 100% right isn't that important at the moment, because
    // these notifications are only observed in the parent process, where the
    // profiler_* functions are currently only called on the main thread.
    nsCOMPtr<nsISupports> subject = aSubject;
    NS_DispatchToMainThread(NS_NewRunnableFunction(
      "NotifyObservers", [=] { NotifyObservers(aTopic, subject); }));
    return;
  }

  if (nsCOMPtr<nsIObserverService> os = services::GetObserverService()) {
    os->NotifyObservers(aSubject, aTopic, nullptr);
  }
}

static void
NotifyProfilerStarted(const int aEntries, double aInterval, uint32_t aFeatures,
                      const char** aFilters, uint32_t aFilterCount)
{
  nsTArray<nsCString> filtersArray;
  for (size_t i = 0; i < aFilterCount; ++i) {
    filtersArray.AppendElement(aFilters[i]);
  }

  nsCOMPtr<nsIProfilerStartParams> params =
    new nsProfilerStartParams(aEntries, aInterval, aFeatures, filtersArray);

  ProfilerParent::ProfilerStarted(params);
  NotifyObservers("profiler-started", params);
}

static void
locked_profiler_start(PSLockRef aLock, const int aEntries, double aInterval,
                      uint32_t aFeatures,
                      const char** aFilters, uint32_t aFilterCount);

// This basically duplicates AutoProfilerLabel's constructor.
PseudoStack*
MozGlueLabelEnter(const char* aLabel, const char* aDynamicString, void* aSp,
                  uint32_t aLine)
{
  PseudoStack* pseudoStack = AutoProfilerLabel::sPseudoStack.get();
  if (pseudoStack) {
    pseudoStack->pushCppFrame(aLabel, aDynamicString, aSp, aLine,
                              js::ProfileEntry::Kind::CPP_NORMAL,
                              js::ProfileEntry::Category::OTHER);
  }
  return pseudoStack;
}

// This basically duplicates AutoProfilerLabel's destructor.
void
MozGlueLabelExit(PseudoStack* aPseudoStack)
{
  if (aPseudoStack) {
    aPseudoStack->pop();
  }
}

static nsTArray<const char*>
SplitAtCommas(const char* aString, UniquePtr<char[]>& aStorage)
{
  size_t len = strlen(aString);
  aStorage = MakeUnique<char[]>(len + 1);
  PodCopy(aStorage.get(), aString, len + 1);

  // Iterate over all characters in aStorage and split at commas, by
  // overwriting commas with the null char.
  nsTArray<const char*> array;
  size_t currentElementStart = 0;
  for (size_t i = 0; i <= len; i++) {
    if (aStorage[i] == ',') {
      aStorage[i] = '\0';
    }
    if (aStorage[i] == '\0') {
      array.AppendElement(&aStorage[currentElementStart]);
      currentElementStart = i + 1;
    }
  }
  return array;
}
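
// Illustrative example (hypothetical input, not part of the original code):
// given aString = "GeckoMain,Compositor", SplitAtCommas copies the string
// into aStorage, overwrites each ',' with '\0', and returns
// { "GeckoMain", "Compositor" }. Empty segments between adjacent commas are
// kept as empty strings, and the returned pointers point into aStorage, so
// the caller must keep aStorage alive for as long as the array is used.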

void
profiler_init(void* aStackTop)
{
  LOG("profiler_init");

  MOZ_RELEASE_ASSERT(!CorePS::Exists());

  if (getenv("MOZ_PROFILER_HELP")) {
    PrintUsageThenExit(0); // terminates execution
  }

  SharedLibraryInfo::Initialize();

  uint32_t features =
#if defined(GP_OS_android)
                      ProfilerFeature::Java |
#endif
                      ProfilerFeature::JS |
                      ProfilerFeature::Leaf |
#if defined(HAVE_NATIVE_UNWIND)
                      ProfilerFeature::StackWalk |
#endif
                      ProfilerFeature::Threads |
                      0;

  UniquePtr<char[]> filterStorage;

  nsTArray<const char*> filters;
  filters.AppendElement("GeckoMain");
  filters.AppendElement("Compositor");
  filters.AppendElement("DOM Worker");

  int entries = PROFILER_DEFAULT_ENTRIES;
  double interval = PROFILER_DEFAULT_INTERVAL;

  {
    PSAutoLock lock(gPSMutex);

    // We've passed the possible failure point. Instantiate CorePS, which
    // indicates that the profiler has initialized successfully.
    CorePS::Create(lock);

    locked_register_thread(lock, kMainThreadName, aStackTop);

    // Platform-specific initialization.
    PlatformInit(lock);

#ifdef MOZ_TASK_TRACER
    tasktracer::InitTaskTracer();
#endif

#if defined(GP_OS_android)
    if (jni::IsFennec()) {
      GeckoJavaSampler::Init();
    }
#endif

    // Setup support for pushing/popping labels in mozglue.
    RegisterProfilerLabelEnterExit(MozGlueLabelEnter, MozGlueLabelExit);

    // (Linux-only) We could create CorePS::mLul and read unwind info into it
    // at this point. That would match the lifetime implied by destruction of
    // it in profiler_shutdown() just below. However, that gives a big delay on
    // startup, even if no profiling is actually to be done. So, instead, it is
    // created on demand at the first call to PlatformStart().

    const char* startupEnv = getenv("MOZ_PROFILER_STARTUP");
    if (!startupEnv || startupEnv[0] == '\0') {
      return;
    }

    LOG("- MOZ_PROFILER_STARTUP is set");

    const char* startupEntries = getenv("MOZ_PROFILER_STARTUP_ENTRIES");
    if (startupEntries && startupEntries[0] != '\0') {
      errno = 0;
      entries = strtol(startupEntries, nullptr, 10);
      if (errno == 0 && entries > 0) {
        LOG("- MOZ_PROFILER_STARTUP_ENTRIES = %d", entries);
      } else {
        LOG("- MOZ_PROFILER_STARTUP_ENTRIES not a valid integer: %s",
            startupEntries);
        PrintUsageThenExit(1);
      }
    }

    const char* startupInterval = getenv("MOZ_PROFILER_STARTUP_INTERVAL");
    if (startupInterval && startupInterval[0] != '\0') {
      errno = 0;
      interval = PR_strtod(startupInterval, nullptr);
      if (errno == 0 && interval > 0.0 && interval <= 1000.0) {
        LOG("- MOZ_PROFILER_STARTUP_INTERVAL = %f", interval);
      } else {
        LOG("- MOZ_PROFILER_STARTUP_INTERVAL not a valid float: %s",
            startupInterval);
        PrintUsageThenExit(1);
      }
    }

    const char* startupFeaturesBitfield =
      getenv("MOZ_PROFILER_STARTUP_FEATURES_BITFIELD");
    if (startupFeaturesBitfield && startupFeaturesBitfield[0] != '\0') {
      errno = 0;
      features = strtol(startupFeaturesBitfield, nullptr, 10);
      if (errno == 0 && features != 0) {
        LOG("- MOZ_PROFILER_STARTUP_FEATURES_BITFIELD = %d", features);
      } else {
        LOG("- MOZ_PROFILER_STARTUP_FEATURES_BITFIELD not a valid integer: %s",
            startupFeaturesBitfield);
        PrintUsageThenExit(1);
      }
    } else {
      const char* startupFeatures = getenv("MOZ_PROFILER_STARTUP_FEATURES");
      if (startupFeatures && startupFeatures[0] != '\0') {
        // Interpret startupFeatures as a list of feature strings, separated by
        // commas.
        UniquePtr<char[]> featureStringStorage;
        nsTArray<const char*> featureStringArray =
          SplitAtCommas(startupFeatures, featureStringStorage);
        features = ParseFeaturesFromStringArray(featureStringArray.Elements(),
                                                featureStringArray.Length());
        LOG("- MOZ_PROFILER_STARTUP_FEATURES = %d", features);
      }
    }

    const char* startupFilters = getenv("MOZ_PROFILER_STARTUP_FILTERS");
    if (startupFilters && startupFilters[0] != '\0') {
      filters = SplitAtCommas(startupFilters, filterStorage);
      LOG("- MOZ_PROFILER_STARTUP_FILTERS = %s", startupFilters);
    }
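
    // Illustrative example (hypothetical values, not from the original code):
    // running with
    //   MOZ_PROFILER_STARTUP=1 MOZ_PROFILER_STARTUP_ENTRIES=1000000 \
    //   MOZ_PROFILER_STARTUP_INTERVAL=5 \
    //   MOZ_PROFILER_STARTUP_FILTERS="GeckoMain,Compositor"
    // starts the profiler at startup with a 1000000-entry buffer, a 5 ms
    // sampling interval, and only the named threads profiled; unset or
    // invalid values fall back to the defaults chosen above (or exit via
    // PrintUsageThenExit for malformed integers/floats).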

    locked_profiler_start(lock, entries, interval, features,
                          filters.Elements(), filters.Length());
  }

  // We do this with gPSMutex unlocked. The comment in profiler_stop() explains
  // why.
  NotifyProfilerStarted(entries, interval, features,
                        filters.Elements(), filters.Length());
}

static void
locked_profiler_save_profile_to_file(PSLockRef aLock, const char* aFilename,
                                     bool aIsShuttingDown);

static SamplerThread*
locked_profiler_stop(PSLockRef aLock);
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
|
Bug 1332577 (part 9) - Remove all mozilla_sampler_*() functions. r=mstange.
There are lots of profiler_*() functions that simply call onto equivalent or
nearly-equivalent mozilla_sampler_*() functions. This patch removes the
unnecessary indirection by removing the mozilla_sampler_*() functions.
The most important changes:
- In platform.cpp, all the mozilla_sampler_*() definitions are renamed as
profiler_*().
- In GeckoProfiler.h, the new PROFILER_FUNC{,_VOID} macros provide a neat way
to declare the functions that must be present whether the profiler is enabled
or not.
- In GeckoProfiler.h, all the mozilla_sampler_*() declarations are removed, as
are all the profiler_*() definitions that corresponded to a
mozilla_sampler_*() function.
Other things of note:
- profiler_log(const char* str) is now defined in platform.cpp, instead of in
GeckoProfiler.h, for consistency with all the other profiler_*() functions.
Likewise with profiler_js_operation_callback() and
profiler_in_privacy_mode().
- ProfilerBacktraceDestructor::operator() is treated slightly different to all
the profiler_*() functions.
- Both variants of profiler_tracing() got some early-return conditions moved
into them from GeckoProfiler.h.
- There were some cases where the profiler_*() and mozilla_sampler_*() name
didn't quite match. Specifically:
* mozilla_sampler_get_profile_data() and profiler_get_profiler_jsobject():
name mismatch. Kept the latter.
* mozilla_sampler_get_profile_data_async() and
profiler_get_profile_jsobject_async(): name mismatch. Kept the latter.
* mozilla_sampler_register_thread() and profiler_register_thread(): return
type mismatch. Changed to void.
* mozilla_sampler_frame_number() and profiler_set_frame_number(): name
mismatch. Kept the latter.
* mozilla_sampler_save_profile_to_file() and
profiler_save_profile_to_file(): the former was 'extern "C"' so it
could be called from a debugger easily. The latter now is 'extern "C"'.
- profiler_get_buffer_info() didn't fit the patterns handled by
PROFILER_FUNC{,_VOID}, so the patch makes it call onto the new function
profiler_get_buffer_info_helper(), which does fit the pattern.
--HG--
extra : rebase_source : fa1817854ade81e8a3027907d1476ff2563f1cc2
2017-01-20 07:05:16 +03:00
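The PROFILER_FUNC{,_VOID} declaration pattern mentioned above can be sketched like this. The macro bodies are hypothetical, not the actual GeckoProfiler.h definitions: the idea is that when the profiler is compiled in, the macro emits a plain declaration, and when it is compiled out, it emits an inline stub returning a default value, so callers need no #ifdefs.

```cpp
#ifdef MOZ_GECKO_PROFILER
# define PROFILER_FUNC(decl, rv)  decl;
# define PROFILER_FUNC_VOID(decl) decl;
#else
// Profiler compiled out: every profiler_*() call collapses to a no-op stub.
# define PROFILER_FUNC(decl, rv)  static inline decl { return rv; }
# define PROFILER_FUNC_VOID(decl) static inline decl {}
#endif

PROFILER_FUNC(int profiler_time(), 0)
PROFILER_FUNC_VOID(void profiler_log(const char* aStr))
```

With MOZ_GECKO_PROFILER undefined, `profiler_time()` is a stub returning 0 and `profiler_log()` does nothing, yet both compile and link everywhere.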
|
|
|
void
|
|
|
|
profiler_shutdown()
|
2013-03-26 01:57:28 +04:00
|
|
|
{
|
2017-03-15 02:56:50 +03:00
|
|
|
LOG("profiler_shutdown");
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
|
2017-01-25 08:00:47 +03:00
|
|
|
MOZ_RELEASE_ASSERT(NS_IsMainThread());
|
2017-04-21 06:28:23 +03:00
|
|
|
MOZ_RELEASE_ASSERT(CorePS::Exists());
|
2017-01-25 08:00:47 +03:00
|
|
|
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
// If the profiler is active we must get a handle to the SamplerThread before
|
2017-04-21 06:28:23 +03:00
|
|
|
// ActivePS is destroyed, in order to delete it.
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
SamplerThread* samplerThread = nullptr;
|
|
|
|
{
|
2017-04-21 06:27:53 +03:00
|
|
|
PSAutoLock lock(gPSMutex);
|
2013-04-18 19:34:49 +04:00
|
|
|
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
// Save the profile on shutdown if requested.
|
2017-04-21 06:28:23 +03:00
|
|
|
if (ActivePS::Exists(lock)) {
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
const char* filename = getenv("MOZ_PROFILER_SHUTDOWN");
|
|
|
|
if (filename) {
|
2017-07-31 21:23:13 +03:00
|
|
|
locked_profiler_save_profile_to_file(lock, filename,
|
|
|
|
/* aIsShuttingDown */ true);
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
}
|
2013-04-18 19:34:49 +04:00
|
|
|
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
samplerThread = locked_profiler_stop(lock);
|
2013-03-26 01:57:28 +04:00
|
|
|
}
|
2013-04-04 02:59:17 +04:00
|
|
|
|
2017-04-21 06:28:23 +03:00
|
|
|
CorePS::Destroy(lock);
|
2017-04-03 03:40:23 +03:00
|
|
|
|
2017-04-27 00:36:13 +03:00
|
|
|
// We just destroyed CorePS and the ThreadInfos it contains, so we can
|
|
|
|
// clear this thread's TLSInfo.
|
|
|
|
TLSInfo::SetInfo(lock, nullptr);
|
2015-12-19 00:12:47 +03:00
|
|
|
|
|
|
|
#ifdef MOZ_TASK_TRACER
|
2017-06-02 02:41:48 +03:00
|
|
|
tasktracer::ShutdownTaskTracer();
|
2015-12-19 00:12:47 +03:00
|
|
|
#endif
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
}
|
|
|
|
|
2017-03-14 02:03:33 +03:00
|
|
|
// We do these operations with gPSMutex unlocked. The comments in
|
|
|
|
// profiler_stop() explain why.
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
if (samplerThread) {
|
2017-05-30 22:06:14 +03:00
|
|
|
ProfilerParent::ProfilerStopped();
|
2017-03-14 02:03:33 +03:00
|
|
|
NotifyObservers("profiler-stopped");
|
    delete samplerThread;
  }
}

UniquePtr<char[]>
profiler_get_profile(double aSinceTime, bool aIsShuttingDown)
{
  LOG("profiler_get_profile");

  MOZ_RELEASE_ASSERT(CorePS::Exists());

  SpliceableChunkedJSONWriter b;
  b.Start();
  {
    if (!profiler_stream_json_for_this_process(b, aSinceTime,
                                               aIsShuttingDown)) {
      return nullptr;
    }

    // Don't include profiles from other processes because this is a
    // synchronous function.
    b.StartArrayProperty("processes");
    b.EndArray();
  }
  b.End();

  return b.WriteFunc()->CopyData();
}

void
profiler_get_start_params(int* aEntries, double* aInterval, uint32_t* aFeatures,
                          Vector<const char*>* aFilters)
{
  MOZ_RELEASE_ASSERT(CorePS::Exists());

  if (NS_WARN_IF(!aEntries) || NS_WARN_IF(!aInterval) ||
      NS_WARN_IF(!aFeatures) || NS_WARN_IF(!aFilters)) {
    return;
  }

  PSAutoLock lock(gPSMutex);

  if (!ActivePS::Exists(lock)) {
    *aEntries = 0;
    *aInterval = 0;
    *aFeatures = 0;
    aFilters->clear();
    return;
  }

  *aEntries = ActivePS::Entries(lock);
  *aInterval = ActivePS::Interval(lock);
  *aFeatures = ActivePS::Features(lock);

  const Vector<std::string>& filters = ActivePS::Filters(lock);
  MOZ_ALWAYS_TRUE(aFilters->resize(filters.length()));
  for (uint32_t i = 0; i < filters.length(); ++i) {
    (*aFilters)[i] = filters[i].c_str();
  }
}

AutoSetProfilerEnvVarsForChildProcess::AutoSetProfilerEnvVarsForChildProcess(
  MOZ_GUARD_OBJECT_NOTIFIER_ONLY_PARAM_IN_IMPL)
  : mSetEntries()
  , mSetInterval()
  , mSetFeaturesBitfield()
  , mSetFilters()
{
  MOZ_GUARD_OBJECT_NOTIFIER_INIT;

  MOZ_RELEASE_ASSERT(CorePS::Exists());

  PSAutoLock lock(gPSMutex);

  if (!ActivePS::Exists(lock)) {
    PR_SetEnv("MOZ_PROFILER_STARTUP=");
    return;
  }

  PR_SetEnv("MOZ_PROFILER_STARTUP=1");
  SprintfLiteral(mSetEntries, "MOZ_PROFILER_STARTUP_ENTRIES=%d",
                 ActivePS::Entries(lock));
  PR_SetEnv(mSetEntries);

  // Use AppendFloat instead of SprintfLiteral with %f because the decimal
  // separator used by %f is locale-dependent. But the string we produce needs
  // to be parseable by strtod, which only accepts the period character as a
  // decimal separator. AppendFloat always uses the period character.
  nsCString setInterval;
  setInterval.AppendLiteral("MOZ_PROFILER_STARTUP_INTERVAL=");
  setInterval.AppendFloat(ActivePS::Interval(lock));
  strncpy(mSetInterval, setInterval.get(), MOZ_ARRAY_LENGTH(mSetInterval));
  mSetInterval[MOZ_ARRAY_LENGTH(mSetInterval) - 1] = '\0';
  PR_SetEnv(mSetInterval);

  SprintfLiteral(mSetFeaturesBitfield,
                 "MOZ_PROFILER_STARTUP_FEATURES_BITFIELD=%d",
                 ActivePS::Features(lock));
  PR_SetEnv(mSetFeaturesBitfield);

  std::string filtersString;
  const Vector<std::string>& filters = ActivePS::Filters(lock);
  for (uint32_t i = 0; i < filters.length(); ++i) {
    filtersString += filters[i];
    if (i != filters.length() - 1) {
      filtersString += ",";
    }
  }
  SprintfLiteral(mSetFilters, "MOZ_PROFILER_STARTUP_FILTERS=%s",
                 filtersString.c_str());
  PR_SetEnv(mSetFilters);
}

AutoSetProfilerEnvVarsForChildProcess::~AutoSetProfilerEnvVarsForChildProcess()
{
  // Our current process doesn't look at these variables after startup, so we
  // can just unset all the variables. This allows us to use literal strings,
  // which will be valid for the whole life time of the program and can be
  // passed to PR_SetEnv without problems.
  PR_SetEnv("MOZ_PROFILER_STARTUP=");
  PR_SetEnv("MOZ_PROFILER_STARTUP_ENTRIES=");
  PR_SetEnv("MOZ_PROFILER_STARTUP_INTERVAL=");
  PR_SetEnv("MOZ_PROFILER_STARTUP_FEATURES_BITFIELD=");
  PR_SetEnv("MOZ_PROFILER_STARTUP_FILTERS=");
}

static void
locked_profiler_save_profile_to_file(PSLockRef aLock, const char* aFilename,
                                     bool aIsShuttingDown = false)
{
  LOG("locked_profiler_save_profile_to_file(%s)", aFilename);

  MOZ_RELEASE_ASSERT(CorePS::Exists() && ActivePS::Exists(aLock));

  std::ofstream stream;
  stream.open(aFilename);
  if (stream.is_open()) {
    SpliceableJSONWriter w(MakeUnique<OStreamJSONWriteFunc>(stream));
    w.Start();
    {
      locked_profiler_stream_json_for_this_process(aLock, w, /* sinceTime */ 0,
                                                   aIsShuttingDown);

      // Don't include profiles from other processes because this is a
      // synchronous function.
      w.StartArrayProperty("processes");
      w.EndArray();
    }
    w.End();

    stream.close();
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
profiler_save_profile_to_file(const char* aFilename)
|
|
|
|
{
|
2017-03-15 02:56:50 +03:00
|
|
|
LOG("profiler_save_profile_to_file(%s)", aFilename);
|
|
|
|
|
2017-04-21 06:28:23 +03:00
|
|
|
MOZ_RELEASE_ASSERT(CorePS::Exists());
|

  PSAutoLock lock(gPSMutex);

  if (!ActivePS::Exists(lock)) {
    return;
  }
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. The following
operations now lock a mutex when they previously didn't; these are the
significant ones, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
  locked_profiler_save_profile_to_file(lock, aFilename);
}

uint32_t
profiler_get_available_features()
{
  MOZ_RELEASE_ASSERT(CorePS::Exists());

  uint32_t features = 0;

  #define ADD_FEATURE(n_, str_, Name_) ProfilerFeature::Set##Name_(features);

  // Add all the possible features.
  PROFILER_FOR_EACH_FEATURE(ADD_FEATURE)

  #undef ADD_FEATURE

  // Now remove features not supported on this platform/configuration.
#if !defined(GP_OS_android)
  ProfilerFeature::ClearJava(features);
#endif
#if !defined(HAVE_NATIVE_UNWIND)
  ProfilerFeature::ClearStackWalk(features);
#endif
#if !defined(MOZ_TASK_TRACER)
  ProfilerFeature::ClearTaskTracer(features);
#endif

  return features;
}
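// The ADD_FEATURE expansion above is an instance of the X-macro pattern: the
// feature list is defined once and re-expanded with different per-entry
// macros. A minimal self-contained sketch of that technique; FOR_EACH_FEATURE
// and the Set functions below are illustrative stand-ins, not the real
// PROFILER_FOR_EACH_FEATURE or ProfilerFeature API.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical feature table in the same (index, string, CamelName) shape as
// PROFILER_FOR_EACH_FEATURE. Each invocation of MACRO sees one entry.
#define FOR_EACH_FEATURE(MACRO) \
  MACRO(0, "java", Java)        \
  MACRO(1, "js", JS)            \
  MACRO(2, "stackwalk", StackWalk)

// Expand one setter per feature, mirroring ProfilerFeature::Set##Name_.
#define DECLARE_SETTER(n_, str_, Name_) \
  inline void Set##Name_(uint32_t& aFeatures) { aFeatures |= (1u << n_); }
FOR_EACH_FEATURE(DECLARE_SETTER)
#undef DECLARE_SETTER

// Same structure as profiler_get_available_features(): set every bit; a real
// implementation would then clear the unsupported ones.
inline uint32_t AvailableFeatures()
{
  uint32_t features = 0;
#define ADD_FEATURE(n_, str_, Name_) Set##Name_(features);
  FOR_EACH_FEATURE(ADD_FEATURE)
#undef ADD_FEATURE
  return features;
}
```

// The payoff of the pattern is that adding a feature means touching only the
// table, not every expansion site.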

void
profiler_get_buffer_info_helper(uint32_t* aCurrentPosition,
                                uint32_t* aEntries,
                                uint32_t* aGeneration)
{
  // This function is called by profiler_get_buffer_info(), which has already
  // zeroed the outparams.

  MOZ_RELEASE_ASSERT(CorePS::Exists());

  PSAutoLock lock(gPSMutex);

  if (!ActivePS::Exists(lock)) {
    return;
  }

  *aCurrentPosition = ActivePS::Buffer(lock).mWritePos;
  *aEntries = ActivePS::Entries(lock);
  *aGeneration = ActivePS::Buffer(lock).mGeneration;
}
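// The (write position, entries, generation) triple reported above lets a
// reader of the buffer tell how far writing has progressed and how many times
// the buffer has wrapped. A minimal sketch of that bookkeeping; RingBuffer is
// an illustrative stand-in, not the real ProfileBuffer.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Fixed-size ring buffer whose generation counter increments on each
// wraparound, mirroring the mWritePos/mGeneration fields read by
// profiler_get_buffer_info_helper().
class RingBuffer
{
public:
  explicit RingBuffer(uint32_t aEntries)
    : mEntries(aEntries), mBuf(aEntries), mWritePos(0), mGeneration(0) {}

  void Add(int aValue)
  {
    mBuf[mWritePos++] = aValue;
    if (mWritePos == mEntries) {
      // Wrapped around: older entries are now being overwritten.
      mWritePos = 0;
      mGeneration++;
    }
  }

  uint32_t WritePos() const { return mWritePos; }
  uint32_t Generation() const { return mGeneration; }

private:
  uint32_t mEntries;
  std::vector<int> mBuf;
  uint32_t mWritePos;
  uint32_t mGeneration;
};
```

// Comparing a saved (generation, position) pair against the current one tells
// the caller whether any previously seen entries have been overwritten.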

static void
PollJSSamplingForCurrentThread()
{
  MOZ_RELEASE_ASSERT(CorePS::Exists());

  PSAutoLock lock(gPSMutex);

  ThreadInfo* info = TLSInfo::Info(lock);
  if (!info) {
    return;
  }

  info->PollJSSampling();
}

// When the profiler is started on a background thread, we can't synchronously
// call PollJSSampling on the main thread's ThreadInfo. And the next regular
// call to PollJSSampling on the main thread would only happen once the main
// thread triggers a JS interrupt callback.
// This means that all the JS execution between profiler_start() and the first
// JS interrupt would happen with JS sampling disabled, and we wouldn't get any
// JS function information for that period of time.
// So in order to start JS sampling as soon as possible, we dispatch a runnable
// to the main thread which manually calls PollJSSamplingForCurrentThread().
// In some cases this runnable will lose the race with the next JS interrupt.
// That's fine; PollJSSamplingForCurrentThread() is immune to redundant calls.
static void
TriggerPollJSSamplingOnMainThread()
{
  nsCOMPtr<nsIThread> mainThread;
  nsresult rv = NS_GetMainThread(getter_AddRefs(mainThread));
  if (NS_SUCCEEDED(rv) && mainThread) {
    nsCOMPtr<nsIRunnable> task =
      NS_NewRunnableFunction("TriggerPollJSSamplingOnMainThread", []() {
        PollJSSamplingForCurrentThread();
      });
    SystemGroup::Dispatch(TaskCategory::Other, task.forget());
  }
}
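// The comment above describes a request/poll handshake: the profiler records
// that JS sampling is wanted, and the JS thread applies the request the next
// time it polls, so a redundant poll (runnable plus interrupt callback) is
// harmless. A minimal sketch of that idempotent pattern; the names below are
// illustrative, not the real ThreadInfo/mJSSampling API.

```cpp
#include <atomic>
#include <cassert>

// Stand-ins for the pending-request flag and the sampling state that the
// real PollJSSampling() maintains.
std::atomic<bool> gJSSamplingRequested{false};
bool gJSSamplingActive = false;

// Called from the profiler (any thread): only records the request, because
// the JS engine can't have sampling enabled from off-thread.
void RequestJSSampling() { gJSSamplingRequested.store(true); }

// Called on the JS thread, from either the dispatched runnable or the JS
// interrupt callback. exchange(false) consumes the request exactly once, so
// it doesn't matter which of the two callers gets here first.
void PollJSSamplingForThread()
{
  if (gJSSamplingRequested.exchange(false)) {
    gJSSamplingActive = true;
  }
}
```

// Making the poll a no-op when no request is pending is what allows both the
// runnable and the interrupt callback to race without coordination.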

static void
locked_profiler_start(PSLockRef aLock, int aEntries, double aInterval,
                      uint32_t aFeatures,
                      const char** aFilters, uint32_t aFilterCount)
{
  if (LOG_TEST) {
    LOG("locked_profiler_start");
    LOG("- entries = %d", aEntries);
    LOG("- interval = %.2f", aInterval);

    #define LOG_FEATURE(n_, str_, Name_) \
      if (ProfilerFeature::Has##Name_(aFeatures)) { \
        LOG("- feature = %s", str_); \
      }

    PROFILER_FOR_EACH_FEATURE(LOG_FEATURE)

    #undef LOG_FEATURE

    for (uint32_t i = 0; i < aFilterCount; i++) {
      LOG("- threads = %s", aFilters[i]);
    }
  }

  MOZ_RELEASE_ASSERT(CorePS::Exists() && !ActivePS::Exists(aLock));

#if defined(GP_PLAT_amd64_windows)
  InitializeWin64ProfilerHooks();
#endif

  // Fall back to the default values if the passed-in values are unreasonable.
  int entries = aEntries > 0 ? aEntries : PROFILER_DEFAULT_ENTRIES;
  double interval = aInterval > 0 ? aInterval : PROFILER_DEFAULT_INTERVAL;

  ActivePS::Create(aLock, entries, interval, aFeatures, aFilters, aFilterCount);

  // Set up profiling for each registered thread, if appropriate.
  int tid = Thread::GetCurrentId();
  const CorePS::ThreadVector& liveThreads = CorePS::LiveThreads(aLock);
  for (uint32_t i = 0; i < liveThreads.size(); i++) {
    ThreadInfo* info = liveThreads.at(i);

    if (ActivePS::ShouldProfileThread(aLock, info)) {
      info->StartProfiling();
      if (ActivePS::FeatureJS(aLock)) {
        info->StartJSSampling();
        if (info->ThreadId() == tid) {
          // We can manually poll the current thread so it starts sampling
          // immediately.
          info->PollJSSampling();
        } else if (info->IsMainThread()) {
          // Dispatch a runnable to the main thread to call PollJSSampling(),
          // so that we don't have to wait for the next JS interrupt callback
          // in order to start profiling JS.
          TriggerPollJSSamplingOnMainThread();
        }
      }
    }
  }

  // Dead ThreadInfos are deleted in profiler_stop(), and dead ThreadInfos
  // aren't saved when the profiler is inactive. Therefore the dead threads
  // vector should be empty here.
  MOZ_RELEASE_ASSERT(CorePS::DeadThreads(aLock).empty());

#ifdef MOZ_TASK_TRACER
  if (ActivePS::FeatureTaskTracer(aLock)) {
    tasktracer::StartLogging();
  }
#endif
|
|
|
|
|
2017-05-08 00:09:33 +03:00
|
|
|
#if defined(GP_OS_android)
|
2017-04-21 06:28:23 +03:00
|
|
|
if (ActivePS::FeatureJava(aLock)) {
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
int javaInterval = interval;
|
2017-04-21 06:28:23 +03:00
|
|
|
// Java sampling doesn't accurately keep up with 1ms sampling.
|
2013-04-23 21:10:29 +04:00
|
|
|
if (javaInterval < 10) {
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of
operations now lock a mutex when they previously didn't; the following are
the ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
      javaInterval = 10;
    }
    java::GeckoJavaSampler::Start(javaInterval, 1000);
  }
#endif

  // At the very end, set up RacyFeatures.
  RacyFeatures::SetActive(ActivePS::Features(aLock));
}

Bug 1332577 (part 9) - Remove all mozilla_sampler_*() functions. r=mstange.
There are lots of profiler_*() functions that simply call onto equivalent or
nearly-equivalent mozilla_sampler_*() functions. This patch removes the
unnecessary indirection by removing the mozilla_sampler_*() functions.
The most important changes:
- In platform.cpp, all the mozilla_sampler_*() definitions are renamed as
profiler_*().
- In GeckoProfiler.h, the new PROFILER_FUNC{,_VOID} macros provide a neat way
to declare the functions that must be present whether the profiler is enabled
or not.
- In GeckoProfiler.h, all the mozilla_sampler_*() declarations are removed, as
are all the profiler_*() definitions that corresponded to a
mozilla_sampler_*() function.
Other things of note:
- profiler_log(const char* str) is now defined in platform.cpp, instead of in
GeckoProfiler.h, for consistency with all the other profiler_*() functions.
Likewise with profiler_js_operation_callback() and
profiler_in_privacy_mode().
- ProfilerBacktraceDestructor::operator() is treated slightly different to all
the profiler_*() functions.
- Both variants of profiler_tracing() got some early-return conditions moved
into them from GeckoProfiler.h.
- There were some cases where the profiler_*() and mozilla_sampler_*() name
didn't quite match. Specifically:
* mozilla_sampler_get_profile_data() and profiler_get_profiler_jsobject():
name mismatch. Kept the latter.
* mozilla_sampler_get_profile_data_async() and
profiler_get_profile_jsobject_async(): name mismatch. Kept the latter.
* mozilla_sampler_register_thread() and profiler_register_thread(): return
type mismatch. Changed to void.
* mozilla_sampler_frame_number() and profiler_set_frame_number(): name
mismatch. Kept the latter.
* mozilla_sampler_save_profile_to_file() and
profiler_save_profile_to_file(): the former was 'extern "C"' so it
could be called from a debugger easily. The latter now is 'extern "C"'.
- profiler_get_buffer_info() didn't fit the patterns handled by
PROFILER_FUNC{,_VOID}, so the patch makes it call onto the new function
profiler_get_buffer_info_helper(), which does fit the pattern.
--HG--
extra : rebase_source : fa1817854ade81e8a3027907d1476ff2563f1cc2
void
profiler_start(int aEntries, double aInterval, uint32_t aFeatures,
               const char** aFilters, uint32_t aFilterCount)
{
  LOG("profiler_start");

  SamplerThread* samplerThread = nullptr;
  {
    PSAutoLock lock(gPSMutex);

    // Initialize if necessary.
    if (!CorePS::Exists()) {
      profiler_init(nullptr);
    }

    // Reset the current state if the profiler is running.
    if (ActivePS::Exists(lock)) {
      samplerThread = locked_profiler_stop(lock);
    }

    locked_profiler_start(lock, aEntries, aInterval, aFeatures,
                          aFilters, aFilterCount);
  }

  // We do these operations with gPSMutex unlocked. The comments in
  // profiler_stop() explain why.
  if (samplerThread) {
    ProfilerParent::ProfilerStopped();
    NotifyObservers("profiler-stopped");
    delete samplerThread;
  }
  NotifyProfilerStarted(aEntries, aInterval, aFeatures,
                        aFilters, aFilterCount);
}

void
profiler_ensure_started(int aEntries, double aInterval, uint32_t aFeatures,
                        const char** aFilters, uint32_t aFilterCount)
{
  LOG("profiler_ensure_started");

  bool startedProfiler = false;
  SamplerThread* samplerThread = nullptr;
  {
    PSAutoLock lock(gPSMutex);

    // Initialize if necessary.
    if (!CorePS::Exists()) {
      profiler_init(nullptr);
    }

    if (ActivePS::Exists(lock)) {
      // The profiler is active.
      if (!ActivePS::Equals(lock, aEntries, aInterval, aFeatures,
                            aFilters, aFilterCount)) {
        // Stop and restart with different settings.
        samplerThread = locked_profiler_stop(lock);
        locked_profiler_start(lock, aEntries, aInterval, aFeatures,
                              aFilters, aFilterCount);
        startedProfiler = true;
      }
    } else {
      // The profiler is stopped.
      locked_profiler_start(lock, aEntries, aInterval, aFeatures,
                            aFilters, aFilterCount);
      startedProfiler = true;
    }
  }

  // We do these operations with gPSMutex unlocked. The comments in
  // profiler_stop() explain why.
  if (samplerThread) {
    ProfilerParent::ProfilerStopped();
    NotifyObservers("profiler-stopped");
    delete samplerThread;
  }
  if (startedProfiler) {
    NotifyProfilerStarted(aEntries, aInterval, aFeatures,
                          aFilters, aFilterCount);
  }
}

Bug 1345262 (part 5) - Fix how JS sampling is started/stopped by the profiler. r=mstange,djvj.
Currently, JS sampling has major problems.
- JS sampling is enabled for all JS threads from the thread that runs
locked_profiler_start() -- currently only the main thread -- but the JS
engine can't handle enabling from off-thread, and asserts. This makes
profiling workers impossible in a debug build.
- No JS thread will be JS sampled unless enableJSSampling() is called, but that
only happens in locked_profiler_start(). That means any worker threads
created while the profiler is active won't be JS sampled.
- Only the thread that runs locked_profiler_stop() -- currently only the main
thread -- ever calls disableJSSampling(). This means that worker threads that
start being JS sampled never stop being JS sampled.
This patch fixes these three problems in the following ways.
- locked_profiler_start() now sets a flag in PseudoStack that indicates
JS sampling is desired, but doesn't directly enable it. Instead, the JS
thread polls that flag and enables JS sampling itself when it sees the flag
is set. The polling is done by the interrupt callback. There was already a
flag of this sort (mJSSampling) but the new one is better.
This required adding a call to profiler_js_operation_callback() to the
InterruptCallback() in XPCJSContext.cpp. (In comparison, the
InterruptCallback() in dom/workers/RuntimeService.cpp already had such a
call.)
- RegisterCurrentThread() now requests JS sampling of a JS thread when the
profiler is active, the thread is being profiled, and JS sampling is enabled.
- locked_profiler_stop() now calls stopJSSampling() on all live threads.
The patch makes the following smaller changes as well.
- Renames profiler_js_operation_callback() as profiler_js_interrupt_callback(),
because "interrupt callback" is the standard name (viz.
JS_AddInterruptCallback()).
- Calls js::RegisterContextProfilingEventMarker() with nullptr when stopping
JS sampling, so that ProfilerJSEventMarker won't fire unnecessarily.
- Some minor formatting changes.
--HG--
extra : rebase_source : 372f94c963a9e5b2493389892499b1ca205ebc2f

static MOZ_MUST_USE SamplerThread*
locked_profiler_stop(PSLockRef aLock)
{
  LOG("locked_profiler_stop");

  MOZ_RELEASE_ASSERT(CorePS::Exists() && ActivePS::Exists(aLock));

  // At the very start, clear RacyFeatures.
  RacyFeatures::SetInactive();

#ifdef MOZ_TASK_TRACER
  if (ActivePS::FeatureTaskTracer(aLock)) {
    tasktracer::StopLogging();
  }
#endif

  // Stop sampling live threads.
  int tid = Thread::GetCurrentId();
  CorePS::ThreadVector& liveThreads = CorePS::LiveThreads(aLock);
  for (uint32_t i = 0; i < liveThreads.size(); i++) {
    ThreadInfo* info = liveThreads.at(i);
    if (info->IsBeingProfiled()) {
      if (ActivePS::FeatureJS(aLock)) {
        info->StopJSSampling();
        if (info->ThreadId() == tid) {
          // We can manually poll the current thread so it stops profiling
          // immediately.
          info->PollJSSampling();
        }
      }
      info->StopProfiling();
    }
  }

  // This is where we destroy the ThreadInfos for all dead threads.
  CorePS::ThreadVector& deadThreads = CorePS::DeadThreads(aLock);
  while (!deadThreads.empty()) {
    delete deadThreads.back();
    deadThreads.pop_back();
  }

  // The Stop() call doesn't actually stop Run(); that happens in this
  // function's caller when the sampler thread is destroyed. Stop() just gives
  // the SamplerThread a chance to do some cleanup with gPSMutex locked.
  SamplerThread* samplerThread = ActivePS::Destroy(aLock);
  samplerThread->Stop(aLock);

  return samplerThread;
}

void
profiler_stop()
{
  LOG("profiler_stop");

  MOZ_RELEASE_ASSERT(CorePS::Exists());

  SamplerThread* samplerThread;
  {
    PSAutoLock lock(gPSMutex);

    if (!ActivePS::Exists(lock)) {
      return;
    }

    samplerThread = locked_profiler_stop(lock);
  }
|
2013-12-18 16:02:34 +04:00
|
|
|
|
2017-03-14 02:03:33 +03:00
|
|
|
// We notify observers with gPSMutex unlocked. Otherwise we might get a
|
2017-05-31 00:07:56 +03:00
|
|
|
// deadlock, if code run by these functions calls a profiler function that
|
|
|
|
// locks gPSMutex, for example when it wants to insert a marker.
|
|
|
|
// (This has been seen in practise in bug 1346356, when we were still firing
|
|
|
|
// these notifications synchronously.)
|
2017-05-30 22:06:14 +03:00
|
|
|
ProfilerParent::ProfilerStopped();
|
2017-03-14 02:03:33 +03:00
|
|
|
NotifyObservers("profiler-stopped");
|
2017-03-14 02:03:33 +03:00
|
|
|
|
2017-03-14 02:03:33 +03:00
|
|
|
// We delete with gPSMutex unlocked. Otherwise we would get a deadlock: we
|
|
|
|
// would be waiting here with gPSMutex locked for SamplerThread::Run() to
|
|
|
|
// return so the join operation within the destructor can complete, but Run()
|
|
|
|
// needs to lock gPSMutex to return.
|
  //
  // Because this call occurs with gPSMutex unlocked, it -- including the final
  // iteration of Run()'s loop -- must be able to detect deactivation and
  // return in a way that's safe with respect to other gPSMutex-locking
  // operations that may have occurred in the meantime.
  delete samplerThread;
}
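
// The lock discipline above is also the template for the pause/resume
// functions that follow: mutate ActivePS state inside a scope that holds
// gPSMutex, then notify observers only after the PSAutoLock has been
// destroyed. A minimal sketch of that shape (hypothetical function name, not
// a real profiler API):
//
//   void profiler_do_thing()
//   {
//     {
//       PSAutoLock lock(gPSMutex);   // released at the closing brace
//       if (!ActivePS::Exists(lock)) {
//         return;
//       }
//       // ...mutate profiler state, passing |lock| as proof of locking...
//     }
//     // gPSMutex is now unlocked; notifying here cannot deadlock even if an
//     // observer re-enters the profiler and takes gPSMutex itself.
//     NotifyObservers("profiler-did-thing");
//   }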

bool
profiler_is_paused()
{
  MOZ_RELEASE_ASSERT(CorePS::Exists());

  PSAutoLock lock(gPSMutex);

  if (!ActivePS::Exists(lock)) {
    return false;
  }
|
2017-01-27 08:25:23 +03:00
|
|
|
|
2017-04-21 06:28:23 +03:00
|
|
|
return ActivePS::IsPaused(lock);
|
2014-03-01 00:16:38 +04:00
|
|
|
}

void
profiler_pause()
{
  LOG("profiler_pause");

  MOZ_RELEASE_ASSERT(CorePS::Exists());

  {
    PSAutoLock lock(gPSMutex);
|
2017-01-27 08:25:23 +03:00
|
|
|
|
2017-04-21 06:28:23 +03:00
|
|
|
if (!ActivePS::Exists(lock)) {
|
2017-03-14 02:03:33 +03:00
|
|
|
return;
|
|
|
|
}
|
2017-01-27 08:25:23 +03:00
|
|
|
|
2017-04-21 06:28:23 +03:00
|
|
|
ActivePS::SetIsPaused(lock, true);
|
2017-07-31 21:44:35 +03:00
|
|
|
ActivePS::Buffer(lock).AddEntry(ProfileBufferEntry::Pause(profiler_time()));
|
2017-03-14 02:03:33 +03:00
|
|
|
}
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
|
2017-03-14 02:03:33 +03:00
|
|
|
// gPSMutex must be unlocked when we notify, to avoid potential deadlocks.
|
2017-05-30 22:06:14 +03:00
|
|
|
ProfilerParent::ProfilerPaused();
|
2017-03-14 02:03:33 +03:00
|
|
|
NotifyObservers("profiler-paused");
|
2014-03-01 00:16:38 +04:00
|
|
|
}

void
profiler_resume()
{
  LOG("profiler_resume");

  MOZ_RELEASE_ASSERT(CorePS::Exists());

  {
    PSAutoLock lock(gPSMutex);

    if (!ActivePS::Exists(lock)) {
      return;
    }

    ActivePS::Buffer(lock).AddEntry(ProfileBufferEntry::Resume(profiler_time()));
    ActivePS::SetIsPaused(lock, false);
  }
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. A number of operation
The following operations now lock a mutex when they previously didn't; the
following are ones that are significant, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
2017-03-08 04:40:39 +03:00
|
|
|
|
2017-03-14 02:03:33 +03:00
|
|
|
// gPSMutex must be unlocked when we notify, to avoid potential deadlocks.
|
2017-05-30 22:06:14 +03:00
|
|
|
ProfilerParent::ProfilerResumed();
|
2017-03-14 02:03:33 +03:00
|
|
|
NotifyObservers("profiler-resumed");
|
2014-03-01 00:16:38 +04:00
|
|
|
}
|
|
|
|
|
Bug 1332577 (part 9) - Remove all mozilla_sampler_*() functions. r=mstange.
There are lots of profiler_*() functions that simply call onto equivalent or
nearly-equivalent mozilla_sampler_*() functions. This patch removes the
unnecessary indirection by removing the mozilla_sampler_*() functions.
The most important changes:
- In platform.cpp, all the mozilla_sampler_*() definitions are renamed as
profiler_*().
- In GeckoProfiler.h, the new PROFILER_FUNC{,_VOID} macros provide a neat way
to declare the functions that must be present whether the profiler is enabled
or not.
- In GeckoProfiler.h, all the mozilla_sampler_*() declarations are removed, as
are all the profiler_*() definitions that corresponded to a
mozilla_sampler_*() function.
Other things of note:
- profiler_log(const char* str) is now defined in platform.cpp, instead of in
GeckoProfiler.h, for consistency with all the other profiler_*() functions.
Likewise with profiler_js_operation_callback() and
profiler_in_privacy_mode().
- ProfilerBacktraceDestructor::operator() is treated slightly different to all
the profiler_*() functions.
- Both variants of profiler_tracing() got some early-return conditions moved
into them from GeckoProfiler.h.
- There were some cases where the profiler_*() and mozilla_sampler_*() name
didn't quite match. Specifically:
* mozilla_sampler_get_profile_data() and profiler_get_profiler_jsobject():
name mismatch. Kept the latter.
* mozilla_sampler_get_profile_data_async() and
profiler_get_profile_jsobject_async(): name mismatch. Kept the latter.
* mozilla_sampler_register_thread() and profiler_register_thread(): return
type mismatch. Changed to void.
* mozilla_sampler_frame_number() and profiler_set_frame_number(): name
mismatch. Kept the latter.
* mozilla_sampler_save_profile_to_file() and
profiler_save_profile_to_file(): the former was 'extern "C"' so it
could be called from a debugger easily. The latter now is 'extern "C"'.
- profiler_get_buffer_info() didn't fit the patterns handled by
PROFILER_FUNC{,_VOID}, so the patch makes it call onto the new function
profiler_get_buffer_info_helper(), which does fit the pattern.
--HG--
extra : rebase_source : fa1817854ade81e8a3027907d1476ff2563f1cc2
bool
profiler_feature_active(uint32_t aFeature)
{
  // This function runs both on and off the main thread.

  MOZ_RELEASE_ASSERT(CorePS::Exists());

  // This function is hot enough that we use RacyFeatures, not ActivePS.
  return RacyFeatures::IsActiveWithFeature(aFeature);
}

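The RacyFeatures comment captures the design: hot-path queries read a single atomic word (an "active" bit packed with per-feature bits) instead of locking gPSMutex. A minimal sketch under assumed names — the real RacyFeatures class in platform.cpp differs in detail:

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical sketch of a lock-free feature-flag word. The high bit
// records "profiler active"; the low bits record enabled features.
class RacyFeaturesSketch {
 public:
  static void SetActive(uint32_t aFeatures) {
    sBits.store(Active | aFeatures, std::memory_order_relaxed);
  }
  static void SetInactive() { sBits.store(0, std::memory_order_relaxed); }
  static bool IsActive() {
    return (sBits.load(std::memory_order_relaxed) & Active) != 0;
  }
  static bool IsActiveWithFeature(uint32_t aFeature) {
    uint32_t bits = sBits.load(std::memory_order_relaxed);
    return (bits & Active) && (bits & aFeature);
  }

 private:
  static constexpr uint32_t Active = 1u << 31;
  static std::atomic<uint32_t> sBits;
};

std::atomic<uint32_t> RacyFeaturesSketch::sBits{0};
```

Packing activity and features into one word means a reader sees a consistent snapshot from a single relaxed load; the trade-off, as the "Racy" name warns, is that the answer may be momentarily stale relative to a concurrent start or stop.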
bool
profiler_is_active()
{
  // This function runs both on and off the main thread.

  MOZ_RELEASE_ASSERT(CorePS::Exists());

  // This function is hot enough that we use RacyFeatures, not ActivePS.
  return RacyFeatures::IsActive();
}

void
profiler_register_thread(const char* aName, void* aGuessStackTop)
{
  DEBUG_LOG("profiler_register_thread(%s)", aName);

  MOZ_ASSERT_IF(NS_IsMainThread(), Scheduler::IsCooperativeThread());
  MOZ_RELEASE_ASSERT(CorePS::Exists());

  PSAutoLock lock(gPSMutex);

  void* stackTop = GetStackTop(aGuessStackTop);
  locked_register_thread(lock, aName, stackTop);
}

void
profiler_unregister_thread()
{
  MOZ_ASSERT_IF(NS_IsMainThread(), Scheduler::IsCooperativeThread());
  MOZ_RELEASE_ASSERT(CorePS::Exists());

  PSAutoLock lock(gPSMutex);

  // We don't call ThreadInfo::StopJSSampling() here; there's no point doing
  // that for a JS thread that is in the process of disappearing.

  int i;
  ThreadInfo* info = FindLiveThreadInfo(lock, &i);
  MOZ_RELEASE_ASSERT(info == TLSInfo::Info(lock));
  if (info) {
    DEBUG_LOG("profiler_unregister_thread: %s", info->Name());
    if (ActivePS::Exists(lock) && info->IsBeingProfiled()) {
      info->NotifyUnregistered();
      CorePS::DeadThreads(lock).push_back(info);
    } else {
      delete info;
    }
    CorePS::ThreadVector& liveThreads = CorePS::LiveThreads(lock);
    liveThreads.erase(liveThreads.begin() + i);

    // Whether or not we just destroyed the ThreadInfo or transferred it to
    // the dead thread vector, we no longer need to access it via TLS.
    TLSInfo::SetInfo(lock, nullptr);

  } else {
    // There are two ways FindLiveThreadInfo() might have failed.
    //
    // - TLSInfo::Init() failed in locked_register_thread().
    //
    // - We've already called profiler_unregister_thread() for this thread.
    //   (Whether or not it should, this does happen in practice.)
    //
    // Either way, TLSInfo should be empty.
    MOZ_RELEASE_ASSERT(!TLSInfo::Info(lock));
  }
}

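profiler_register_thread() and profiler_unregister_thread() together implement a pattern worth isolating: a mutex-guarded live-thread list mirrored by a thread-local pointer, where unregistering clears the TLS slot and tolerates a second call. A minimal stand-alone sketch (hypothetical names — not the real ThreadInfo/TLSInfo/CorePS classes, and without the dead-thread transfer):

```cpp
#include <algorithm>
#include <mutex>
#include <string>
#include <vector>

// Hypothetical sketch of the register/unregister pattern: a mutex-guarded
// live-thread list plus a thread-local pointer mirroring this thread's entry.
struct ThreadInfoSketch {
  std::string mName;
};

static std::mutex gThreadsMutex;
static std::vector<ThreadInfoSketch*> gLiveThreads;
static thread_local ThreadInfoSketch* tlsInfo = nullptr;

void RegisterThreadSketch(const char* aName) {
  std::lock_guard<std::mutex> lock(gThreadsMutex);
  auto* info = new ThreadInfoSketch{aName};
  gLiveThreads.push_back(info);
  tlsInfo = info;  // Later calls on this thread find the entry without a search.
}

void UnregisterThreadSketch() {
  std::lock_guard<std::mutex> lock(gThreadsMutex);
  ThreadInfoSketch* info = tlsInfo;
  if (!info) {
    // Never registered, or already unregistered; tolerated, like the
    // else-branch assertion in profiler_unregister_thread() above.
    return;
  }
  gLiveThreads.erase(std::find(gLiveThreads.begin(), gLiveThreads.end(), info));
  delete info;
  tlsInfo = nullptr;  // The entry must no longer be reachable via TLS.
}
```

Clearing the TLS slot on unregister is what makes the double-call case above detectable: the second call finds an empty slot instead of a dangling pointer.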
void
profiler_thread_sleep()
{
  // This function runs both on and off the main thread.

  MOZ_RELEASE_ASSERT(CorePS::Exists());

  RacyThreadInfo* racyInfo = TLSInfo::RacyInfo();
  if (!racyInfo) {
    return;
  }

  racyInfo->SetSleeping();
}

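profiler_thread_sleep() and its counterpart profiler_thread_wake() toggle a per-thread flag that the off-thread sampler can read; since writer (the owning thread) and reader (the sampler) are different threads, the flag must be atomic. A minimal sketch under assumed names — the real RacyThreadInfo tracks more state than this:

```cpp
#include <atomic>

// Hypothetical sketch of the per-thread sleep flag: only the atomic toggle
// that lets the sampler thread observe sleep status without locking.
class RacySleepFlagSketch {
 public:
  void SetSleeping() { mSleeping.store(true, std::memory_order_relaxed); }
  void SetAwake() { mSleeping.store(false, std::memory_order_relaxed); }

  // Intended to be callable from the sampler thread, not just the owner.
  bool IsSleeping() const {
    return mSleeping.load(std::memory_order_relaxed);
  }

 private:
  std::atomic<bool> mSleeping{false};
};
```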
2017-01-20 07:05:16 +03:00
|
|
|
void
profiler_thread_wake()
{
  // This function runs both on and off the main thread.

  MOZ_RELEASE_ASSERT(CorePS::Exists());

  RacyThreadInfo* racyInfo = TLSInfo::RacyInfo();
  if (!racyInfo) {
    return;
  }

  racyInfo->SetAwake();
}

bool
profiler_thread_is_sleeping()
{
  MOZ_RELEASE_ASSERT(NS_IsMainThread());
  MOZ_RELEASE_ASSERT(CorePS::Exists());

  RacyThreadInfo* racyInfo = TLSInfo::RacyInfo();
  if (!racyInfo) {
    return false;
  }
  return racyInfo->IsSleeping();
}

void
profiler_js_interrupt_callback()
{
  // This function runs on JS threads being sampled.
  PollJSSamplingForCurrentThread();
}

double
profiler_time()
{
  MOZ_RELEASE_ASSERT(CorePS::Exists());

  TimeDuration delta = TimeStamp::Now() - CorePS::ProcessStartTime();
  return delta.ToMilliseconds();
}

UniqueProfilerBacktrace
profiler_get_backtrace()
{
  MOZ_RELEASE_ASSERT(CorePS::Exists());

  PSAutoLock lock(gPSMutex);

  if (!ActivePS::Exists(lock) || ActivePS::FeaturePrivacy(lock)) {
    return nullptr;
  }

  ThreadInfo* info = TLSInfo::Info(lock);
  if (!info) {
    MOZ_ASSERT(info);
    return nullptr;
  }

  int tid = Thread::GetCurrentId();

  TimeStamp now = TimeStamp::Now();

  Registers regs;
#if defined(HAVE_NATIVE_UNWIND)
  regs.SyncPopulate();
#else
  regs.Clear();
#endif

  // 1000 should be plenty for a single backtrace.
  auto buffer = MakeUnique<ProfileBuffer>(1000);

  DoSyncSample(lock, *info, now, regs, *buffer.get());

  return UniqueProfilerBacktrace(
    new ProfilerBacktrace("SyncProfile", tid, Move(buffer)));
}

void
ProfilerBacktraceDestructor::operator()(ProfilerBacktrace* aBacktrace)
{
  delete aBacktrace;
}

static void
|
2017-06-01 06:33:22 +03:00
|
|
|
racy_profiler_add_marker(const char* aMarkerName,
|
2017-06-16 03:51:05 +03:00
|
|
|
UniquePtr<ProfilerMarkerPayload> aPayload)
|
2013-09-27 20:08:45 +04:00
|
|
|
{
|
2017-04-21 06:28:23 +03:00
|
|
|
MOZ_RELEASE_ASSERT(CorePS::Exists());
|
Bug 1342306 (part 3) - Properly synchronize the global state in platform*.cpp. r=mstange.
This patch properly synchronizes all the global state in platform*.cpp, which
gets us a long way towards implementing bug 1330184.
- Most of the global state goes in a new class, ProfilerState, with a single
instance, gPS. All accesses to gPS are protected by gPSMutex. All functions
that access ProfilerState require a token proving that gPS is locked; this
makes things much clearer.
gRegisteredThreadsMutex is removed because it is subsumed by gPSMutex.
- gVerbosity, however, does not go in ProfilerState. It stays separate, and
gains its own mutex, gVerbosityMutex.
Also, the tracking of the current profiler state is streamlined. Previously it
was tracked via:
- stack_key_initialized, gInitCount, gSampler, gIsProfiling, gIsActive, and
gIsPaused.
Now it is tracked via:
- gPS, gPS->sActivity, and gPS->mIsPaused.
This means that the Sampler class is no longer necessary, and the patch removes
it.
Other changes of note made by the patch are as follows.
- It removes ThreadInfo::{mMutex,GetMutex}. This mutex was only used in two
places, and both these are now protected by gPSMutex.
- It tweaks the LOG calls. All the main functions (init(), shutdown(), start(),
stop()) now do consistent BEGIN/END logging, and a couple of other low-value
incidental LOG calls have been removed.
- It adds a lot of release assertions requiring that gPS be initialized (e.g.
profiler_init() has been called but profiler_shutdown() has not).
- It uses alphabetical order for everything involving profiler feature names.
- It removes Platform{Start,Stop}() and SamplerThread::{Start,Stop}Sampler().
These are no longer necessary now that SamplerThread::sInstance has been
replaced with ProfilerState::mSamplerThread which allows more direct access
to the current SamplerThread instance.
- It removes PseudoStack::mPrivacyMode. This was derived from the "privacy"
feature, and we now use gPS->mFeaturePrivacy directly, which is simpler.
It also replaces profiler_in_privacy_mode() with
profiler_is_active_and_not_in_privacy_mode(), which avoids an unnecessary
lock/unlock of gPSMutex on a moderately hot path.
Finally, the new code does more locking than the old one. The following
operations now lock a mutex when they previously didn't; these are the
significant ones, according to some ad hoc profiling.
- profiler_tracing()
- profiler_is_active()
- profiler_is_active_and_not_in_privacy_mode()
- profiler_add_marker()
- profiler_feature_active()
- SamplerThread::Run() [when the profiler is paused]
All up this roughly doubles the amount of mutex locking done by the profiler.
It's probably possible to avoid this increase by allowing careful unlocked
access to three of the fields in ProfilerState (mActivityGeneration,
mFeaturePrivacy, mStartTime), but this should only be done as a follow-up if
the extra locking is found to be a problem.
--HG--
extra : rebase_source : c2e41231f131b3e9ccd23ddf43626b54ccc77b7b
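The patch description above mentions that functions accessing ProfilerState require "a token proving that gPS is locked". A minimal standalone sketch of that lock-token idiom (names like `PSAutoLock` mirror the patch; the counter and helpers are hypothetical):

```cpp
#include <cassert>
#include <mutex>

// Sketch of the "lock token" idiom: functions that touch shared profiler
// state take a const PSAutoLock&, so every call site must already hold
// gPSMutex in order to obtain the token.
static std::mutex gPSMutex;

class PSAutoLock {
 public:
  explicit PSAutoLock(std::mutex& aMutex) : mLock(aMutex) {}
  PSAutoLock(const PSAutoLock&) = delete;
  PSAutoLock& operator=(const PSAutoLock&) = delete;

 private:
  std::lock_guard<std::mutex> mLock;
};

static int gSampleCount = 0;

// Only callable with proof that gPSMutex is held.
static int SampleCount(const PSAutoLock&) { return gSampleCount; }
static void AddSample(const PSAutoLock&) { ++gSampleCount; }

int RecordAndCount() {
  PSAutoLock lock(gPSMutex);  // acquiring the mutex produces the token
  AddSample(lock);
  return SampleCount(lock);
}
```

The benefit is that forgetting to lock becomes a compile error rather than a data race, since `SampleCount()` and `AddSample()` cannot be called without a token in scope.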
  // We don't assert that RacyFeatures::IsActiveWithoutPrivacy() is true here,
  // because it's possible that the result has changed since we tested it in
  // the caller.
  //
  // Because of this imprecision it's possible to miss a marker or record one
  // we shouldn't. Either way is not a big deal.

  RacyThreadInfo* racyInfo = TLSInfo::RacyInfo();
  if (!racyInfo) {
    return;
  }

  TimeStamp origin = (aPayload && !aPayload->GetStartTime().IsNull())
                   ? aPayload->GetStartTime()
                   : TimeStamp::Now();
  TimeDuration delta = origin - CorePS::ProcessStartTime();
  racyInfo->AddPendingMarker(aMarkerName, Move(aPayload),
                             delta.ToMilliseconds());
}
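A marker's time is stored as a millisecond offset from process start, computed from either the payload's own start time or the current time. A standalone sketch of the same computation using std::chrono (the clock and names here are stand-ins for mozilla::TimeStamp, not the real API):

```cpp
#include <cassert>
#include <chrono>

// Sketch of the marker-time computation: a marker's time is a
// floating-point millisecond offset from a fixed process start point.
using Clock = std::chrono::steady_clock;

static const Clock::time_point kProcessStartTime = Clock::now();

double MsSinceProcessStart(Clock::time_point aOrigin) {
  // duration<double, milli> converts the tick count to fractional ms.
  std::chrono::duration<double, std::milli> delta =
      aOrigin - kProcessStartTime;
  return delta.count();
}
```

Recording offsets rather than absolute timestamps keeps buffer entries small and makes profiles from different runs directly comparable.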

void
profiler_add_marker(const char* aMarkerName,
                    UniquePtr<ProfilerMarkerPayload> aPayload)
{
  MOZ_RELEASE_ASSERT(CorePS::Exists());

  // This function is hot enough that we use RacyFeatures, not ActivePS.
  if (!RacyFeatures::IsActiveWithoutPrivacy()) {
    return;
  }

  racy_profiler_add_marker(aMarkerName, Move(aPayload));
}

void
profiler_add_marker(const char* aMarkerName)
{
  profiler_add_marker(aMarkerName, nullptr);
}
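The hot-path gate used above checks a lock-free flag word instead of taking gPSMutex. A minimal sketch of such a RacyFeatures-style gate (illustrative only; `RacyFlags` and its members are hypothetical names, not the actual Gecko implementation):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Sketch of a RacyFeatures-style check: the "active" and "privacy" bits
// live in one atomic word, so hot paths can test "active and not private"
// without locking. The race is benign: at worst a marker is missed, or
// recorded just as the profiler stops.
class RacyFlags {
 public:
  static void SetActive(bool aPrivacy) {
    sBits.store(kActive | (aPrivacy ? kPrivacy : 0),
                std::memory_order_relaxed);
  }
  static void SetInactive() { sBits.store(0, std::memory_order_relaxed); }
  static bool IsActiveWithoutPrivacy() {
    uint32_t bits = sBits.load(std::memory_order_relaxed);
    return (bits & kActive) && !(bits & kPrivacy);
  }

 private:
  static constexpr uint32_t kActive = 1u << 0;
  static constexpr uint32_t kPrivacy = 1u << 1;
  static std::atomic<uint32_t> sBits;
};

std::atomic<uint32_t> RacyFlags::sBits{0};
```

Packing both bits into one word means a single relaxed load answers the combined question, which is why the commit message above singles out this path as avoiding a lock/unlock of gPSMutex.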

void
profiler_tracing(const char* aCategory, const char* aMarkerName,
                 TracingKind aKind)
{
  MOZ_RELEASE_ASSERT(CorePS::Exists());

  // This function is hot enough that we use RacyFeatures, not ActivePS.
  if (!RacyFeatures::IsActiveWithoutPrivacy()) {
    return;
  }

  auto payload = MakeUnique<TracingMarkerPayload>(aCategory, aKind);
  racy_profiler_add_marker(aMarkerName, Move(payload));
}

void
profiler_tracing(const char* aCategory, const char* aMarkerName,
                 TracingKind aKind, UniqueProfilerBacktrace aCause)
{
  MOZ_RELEASE_ASSERT(CorePS::Exists());

  // This function is hot enough that we use RacyFeatures, not ActivePS.
  if (!RacyFeatures::IsActiveWithoutPrivacy()) {
    return;
  }

  auto payload =
    MakeUnique<TracingMarkerPayload>(aCategory, aKind, Move(aCause));
  racy_profiler_add_marker(aMarkerName, Move(payload));
}

PseudoStack*
profiler_get_pseudo_stack()
{
  return TLSInfo::Stack();
}

void
profiler_set_js_context(JSContext* aCx)
{
  MOZ_ASSERT(aCx);

  PSAutoLock lock(gPSMutex);

  ThreadInfo* info = TLSInfo::Info(lock);
  if (!info) {
    return;
  }

  info->SetJSContext(aCx);
}

void
profiler_clear_js_context()
{
  MOZ_RELEASE_ASSERT(CorePS::Exists());

  PSAutoLock lock(gPSMutex);

  ThreadInfo* info = TLSInfo::Info(lock);
  if (!info || !info->mContext) {
    return;
  }

  // On JS shutdown, flush the current buffer, as stringifying JIT samples
  // requires a live JSContext.

  if (ActivePS::Exists(lock)) {
    // Flush this thread's ThreadInfo, if it is being profiled.
    if (info->IsBeingProfiled()) {
      info->FlushSamplesAndMarkers(CorePS::ProcessStartTime(),
                                   ActivePS::Buffer(lock));
    }
  }
|
|
|
|
|
2017-04-27 00:36:17 +03:00
|
|
|
// We don't call info->StopJSSampling() here; there's no point doing that for
|
|
|
|
// a JS thread that is in the process of disappearing.
  info->mContext = nullptr;
}
int
profiler_current_thread_id()
{
  return Thread::GetCurrentId();
}
// NOTE: aCollector's methods will be called while the target thread is paused.
// Doing things in those methods like allocating -- which may try to claim
// locks -- is a surefire way to deadlock.
void
profiler_suspend_and_sample_thread(int aThreadId,
                                   uint32_t aFeatures,
                                   ProfilerStackCollector& aCollector,
                                   bool aSampleNative /* = true */)
{
  // Lock the profiler mutex
  PSAutoLock lock(gPSMutex);

  const CorePS::ThreadVector& liveThreads = CorePS::LiveThreads(lock);
  for (uint32_t i = 0; i < liveThreads.size(); i++) {
    ThreadInfo* info = liveThreads.at(i);

    if (info->ThreadId() == aThreadId) {
      if (info->IsMainThread()) {
        aCollector.SetIsMainThread();
      }

      // Allocate the space for the native stack
      NativeStack nativeStack;

      // Suspend, sample, and then resume the target thread.
      Sampler sampler(lock);
      sampler.SuspendAndSampleAndResumeThread(lock, *info,
                                              [&](const Registers& aRegs) {
        // The target thread is now suspended. Collect a native backtrace, and
        // call the callback.
        bool isSynchronous = false;
#if defined(HAVE_FASTINIT_NATIVE_UNWIND)
        if (aSampleNative) {
          // We can only use FramePointerStackWalk or MozStackWalk from
          // suspend_and_sample_thread as other stackwalking methods may not be
          // initialized.
# if defined(USE_FRAME_POINTER_STACK_WALK)
          DoFramePointerBacktrace(lock, *info, aRegs, nativeStack);
# elif defined(USE_MOZ_STACK_WALK)
          DoMozStackWalkBacktrace(lock, *info, aRegs, nativeStack);
# else
#  error "Invalid configuration"
# endif

          MergeStacks(aFeatures, isSynchronous, *info, aRegs, nativeStack,
                      aCollector);
        } else
#endif
        {
          MergeStacks(aFeatures, isSynchronous, *info, aRegs, nativeStack,
                      aCollector);

          if (ProfilerFeature::HasLeaf(aFeatures)) {
            aCollector.CollectNativeLeafAddr((void*)aRegs.mPC);
          }
        }
      });

      // NOTE: Make sure to disable the sampler before it is destroyed, in case
      // the profiler is running at the same time.
      sampler.Disable(lock);
      break;
    }
  }
}
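// Example usage (illustrative sketch only; `MyCollector` is a hypothetical
// ProfilerStackCollector subclass, not defined in this file). A background
// thread that previously captured a target thread's id via
// profiler_current_thread_id() on that thread could sample it like so:
//
//   MyCollector collector;
//   profiler_suspend_and_sample_thread(targetThreadId, features, collector,
//                                      /* aSampleNative = */ true);
//
// Per the NOTE above, the collector's callbacks run while the target thread
// is suspended, so they must not allocate or take locks.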
// END externally visible functions
////////////////////////////////////////////////////////////////////////