/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
/* vim: set ts=8 sts=2 et sw=2 tw=80: */
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

//
// This file implements a garbage-cycle collector based on the paper
//
// Concurrent Cycle Collection in Reference Counted Systems
// Bacon & Rajan (2001), ECOOP 2001 / Springer LNCS vol 2072
//
// We are not using the concurrent or acyclic cases of that paper; so
// the green, red and orange colors are not used.
//
// The collector is based on tracking pointers of four colors:
//
// Black nodes are definitely live. If we ever determine a node is
// black, it's ok to forget about it and drop it from our records.
//
// White nodes are definitely garbage cycles. Once we finish with our
// scanning, we unlink all the white nodes and expect that by
// unlinking them they will self-destruct (since a garbage cycle is
// only keeping itself alive with internal links, by definition).
//
// Snow-white is an addition to the original algorithm. A snow-white node
// has reference count zero and is just waiting for deletion.
//
// Grey nodes are being scanned. Nodes that turn grey will turn
// either black if we determine that they're live, or white if we
// determine that they're a garbage cycle. After the main collection
// algorithm there should be no grey nodes.
//
// Purple nodes are *candidates* for being scanned. They are nodes we
// haven't begun scanning yet because they're not old enough, or we're
// still partway through the algorithm.
//
// XPCOM objects participating in garbage-cycle collection are obliged
// to inform us when they ought to turn purple; that is, when their
// refcount transitions from N+1 -> N, for nonzero N. Furthermore we
// require that *after* an XPCOM object has informed us of turning
// purple, they will tell us when they either transition back to being
// black (incremented refcount) or are ultimately deleted.
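The trial-deletion pass these colors support can be sketched in a few dozen lines. The following is a minimal, self-contained illustration of Bacon & Rajan's synchronous algorithm with a toy `Node` type; the names (`Node`, `MarkGray`, `Scan`, `ScanBlack`) are illustrative only and are not this collector's actual API:

```cpp
// Toy sketch of Bacon & Rajan (2001) trial deletion, not Gecko code.
#include <cassert>
#include <vector>

enum Color { kBlack, kWhite, kGray };

struct Node {
  int refCount = 0;            // external + internal references
  Color color = kBlack;
  std::vector<Node*> children; // strong references held by this node
};

// Trial deletion: subtract the references internal to the suspected subgraph.
void MarkGray(Node* n) {
  if (n->color == kGray) return;
  n->color = kGray;
  for (Node* c : n->children) {
    --c->refCount;
    MarkGray(c);
  }
}

void ScanBlack(Node* n);

// After MarkGray, a nonzero count means some reference came from outside the
// subgraph: the node is live (black). A zero count means provisional garbage
// (white).
void Scan(Node* n) {
  if (n->color != kGray) return;
  if (n->refCount > 0) {
    ScanBlack(n);
  } else {
    n->color = kWhite;
    for (Node* c : n->children) Scan(c);
  }
}

// Re-blacken a live subgraph, restoring the counts trial deletion removed.
void ScanBlack(Node* n) {
  n->color = kBlack;
  for (Node* c : n->children) {
    ++c->refCount;
    if (c->color != kBlack) ScanBlack(c);
  }
}
```

A pure two-node cycle ends up all white (collectable), while the same cycle with one extra external reference ends up all black with its refcounts restored.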

// Incremental cycle collection
//
// Beyond the simple state machine required to implement incremental
// collection, the CC needs to be able to compensate for things the browser
// is doing during the collection. There are two kinds of problems. For each
// of these, there are two cases to deal with: purple-buffered C++ objects
// and JS objects.

// The first problem is that an object in the CC's graph can become garbage.
// This is bad because the CC touches the objects in its graph at every
// stage of its operation.
//
// All cycle collected C++ objects that die during a cycle collection
// will end up actually getting deleted by the SnowWhiteKiller. Before
// the SWK deletes an object, it checks if an ICC is running, and if so,
// if the object is in the graph. If it is, the CC clears mPointer and
// mParticipant so it does not point to the raw object any more. Because
// objects could die any time the CC returns to the mutator, any time the CC
// accesses a PtrInfo it must perform a null check on mParticipant to
// ensure the object has not gone away.
//
// JS objects don't always run finalizers, so the CC can't remove them from
// the graph when they die. Fortunately, JS objects can only die during a GC,
// so if a GC is begun during an ICC, the browser synchronously finishes off
// the ICC, which clears the entire CC graph. If the GC and CC are scheduled
// properly, this should be rare.
//
// The second problem is that objects in the graph can be changed, say by
// being addrefed or released, or by having a field updated, after the object
// has been added to the graph. The problem is that ICC can miss a newly
// created reference to an object, and end up unlinking an object that is
// actually alive.
//
// The basic idea of the solution, from "An on-the-fly Reference Counting
// Garbage Collector for Java" by Levanoni and Petrank, is to notice if an
// object has had an additional reference to it created during the collection,
// and if so, don't collect it during the current collection. This avoids having
// to rerun the scan as in Bacon & Rajan 2001.
//
// For cycle collected C++ objects, we modify AddRef to place the object in
// the purple buffer, in addition to Release. Then, in the CC, we treat any
// objects in the purple buffer as being alive, after graph building has
// completed. Because they are in the purple buffer, they will be suspected
// in the next CC, so there's no danger of leaks. This is imprecise, because
// we will treat as live an object that has been Released but not AddRefed
// during graph building, but that's probably rare enough that the additional
// bookkeeping overhead is not worthwhile.
//
// For JS objects, the cycle collector is only looking at gray objects. If a
// gray object is touched during ICC, it will be made black by UnmarkGray.
// Thus, if a JS object has become black during the ICC, we treat it as live.
// Merged JS zones have to be handled specially: we scan all zone globals.
// If any are black, we treat the zone as being black.
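The purple-buffer-on-AddRef idea for C++ objects can be sketched as follows. `Collector`, `Suspect`, and `TreatAsLive` are hypothetical names for illustration only; Gecko's real interfaces differ:

```cpp
// Sketch of the Levanoni & Petrank-style rule: any object whose AddRef is
// observed while a collection is in flight is treated as live for this
// cycle, and remains suspected for the next one. Not Gecko's actual code.
#include <cassert>
#include <unordered_set>

struct Object;

struct Collector {
  bool collecting = false;
  std::unordered_set<Object*> purpleBuffer;  // suspects for the next CC

  void Suspect(Object* aObj) { purpleBuffer.insert(aObj); }

  // After graph building, anything in the purple buffer is kept alive: it
  // may have gained a reference the graph snapshot cannot see.
  bool TreatAsLive(Object* aObj) const {
    return collecting && purpleBuffer.count(aObj) > 0;
  }
};

struct Object {
  int refCount = 0;
  Collector* collector = nullptr;

  void AddRef() {
    ++refCount;
    if (collector && collector->collecting) {
      collector->Suspect(this);  // the extra step taken during a collection
    }
  }

  void Release() {
    --refCount;
    if (collector && refCount > 0) {
      collector->Suspect(this);  // ordinary purple-buffer suspicion
    }
  }
};
```

The imprecision described above is visible here too: an object suspected only via `Release` during graph building is also spared until the next collection.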

// Safety
//
// An XPCOM object is either scan-safe or scan-unsafe, purple-safe or
// purple-unsafe.
//
// An nsISupports object is scan-safe if:
//
// - It can be QI'ed to |nsXPCOMCycleCollectionParticipant|, though
//   this operation loses ISupports identity (like nsIClassInfo).
// - Additionally, the operation |traverse| on the resulting
//   nsXPCOMCycleCollectionParticipant does not cause *any* refcount
//   adjustment to occur (no AddRef / Release calls).
//
// A non-nsISupports ("native") object is scan-safe by explicitly
// providing its nsCycleCollectionParticipant.
//
// An object is purple-safe if it satisfies the following properties:
//
// - The object is scan-safe.
//
// When we receive a pointer |ptr| via
// |nsCycleCollector::suspect(ptr)|, we assume it is purple-safe. We
// can check the scan-safety, but have no way to ensure the
// purple-safety; objects must obey, or else the entire system falls
// apart. Don't involve an object in this scheme if you can't
// guarantee its purple-safety. The easiest way to ensure that an
// object is purple-safe is to use nsCycleCollectingAutoRefCnt.
//
// When we have a scannable set of purple nodes ready, we begin
// our walks. During the walks, the nodes we |traverse| should only
// feed us more scan-safe nodes, and should not adjust the refcounts
// of those nodes.
//
// We do not |AddRef| or |Release| any objects during scanning. We
// rely on the purple-safety of the roots that call |suspect| to
// hold, such that we will clear the pointer from the purple buffer
// entry to the object before it is destroyed. The pointers that are
// merely scan-safe we hold only for the duration of scanning, and
// there should be no objects released from the scan-safe set during
// the scan.
//
// We *do* call |Root| and |Unroot| on every white object, on
// either side of the calls to |Unlink|. This keeps the set of white
// objects alive during the unlinking.
//

#if !defined(__MINGW32__)
#  ifdef WIN32
#    include <crtdbg.h>
#    include <errno.h>
#  endif
#endif

#include "base/process_util.h"

#include "mozilla/ArrayUtils.h"
#include "mozilla/AutoRestore.h"
#include "mozilla/CycleCollectedJSContext.h"
#include "mozilla/CycleCollectedJSRuntime.h"
#include "mozilla/DebugOnly.h"
#include "mozilla/HashFunctions.h"
#include "mozilla/HashTable.h"
#include "mozilla/HoldDropJSObjects.h"
/* This must occur *after* base/process_util.h to avoid typedefs conflicts. */
#include "mozilla/LinkedList.h"
#include "mozilla/MemoryReporting.h"
#include "mozilla/Move.h"
#include "mozilla/MruCache.h"
#include "mozilla/SegmentedVector.h"

#include "nsCycleCollectionParticipant.h"
#include "nsCycleCollectionNoteRootCallback.h"
#include "nsDeque.h"
#include "nsExceptionHandler.h"
#include "nsCycleCollector.h"
#include "nsThreadUtils.h"
#include "nsXULAppAPI.h"
#include "prenv.h"
#include "nsPrintfCString.h"
#include "nsTArray.h"
#include "nsIConsoleService.h"
#include "mozilla/Attributes.h"
#include "nsICycleCollectorListener.h"
#include "nsISerialEventTarget.h"
#include "nsIMemoryReporter.h"
#include "nsIFile.h"
#include "nsDumpUtils.h"
#include "xpcpublic.h"
#include "GeckoProfiler.h"
#include <stdint.h>
#include <stdio.h>

#include "mozilla/AutoGlobalTimelineMarker.h"
#include "mozilla/Likely.h"
#include "mozilla/PoisonIOInterposer.h"
#include "mozilla/Telemetry.h"
#include "mozilla/ThreadLocal.h"

using namespace mozilla;

struct NurseryPurpleBufferEntry {
  void* mPtr;
  nsCycleCollectionParticipant* mParticipant;
  nsCycleCollectingAutoRefCnt* mRefCnt;
};

#define NURSERY_PURPLE_BUFFER_SIZE 2048
bool gNurseryPurpleBufferEnabled = true;
NurseryPurpleBufferEntry gNurseryPurpleBufferEntry[NURSERY_PURPLE_BUFFER_SIZE];
uint32_t gNurseryPurpleBufferEntryCount = 0;

void ClearNurseryPurpleBuffer();

static void SuspectUsingNurseryPurpleBuffer(
    void* aPtr, nsCycleCollectionParticipant* aCp,
    nsCycleCollectingAutoRefCnt* aRefCnt) {
  MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
  MOZ_ASSERT(gNurseryPurpleBufferEnabled);
  if (gNurseryPurpleBufferEntryCount == NURSERY_PURPLE_BUFFER_SIZE) {
    ClearNurseryPurpleBuffer();
  }

  gNurseryPurpleBufferEntry[gNurseryPurpleBufferEntryCount] = {aPtr, aCp,
                                                               aRefCnt};
  ++gNurseryPurpleBufferEntryCount;
}

//#define COLLECT_TIME_DEBUG

// Enable assertions that are useful for diagnosing errors in graph
// construction.
//#define DEBUG_CC_GRAPH

#define DEFAULT_SHUTDOWN_COLLECTIONS 5

// One to do the freeing, then another to detect there is no more work to do.
#define NORMAL_SHUTDOWN_COLLECTIONS 2

// Cycle collector environment variables
//
// MOZ_CC_LOG_ALL: If defined, always log cycle collector heaps.
//
// MOZ_CC_LOG_SHUTDOWN: If defined, log cycle collector heaps at shutdown.
//
// MOZ_CC_LOG_THREAD: If set to "main", only automatically log main thread
// CCs. If set to "worker", only automatically log worker CCs. If set to "all",
// log either. The default value is "all". This must be used with either
// MOZ_CC_LOG_ALL or MOZ_CC_LOG_SHUTDOWN for it to do anything.
//
// MOZ_CC_LOG_PROCESS: If set to "main", only automatically log main process
// CCs. If set to "content", only automatically log tab CCs. If set to
// "plugins", only automatically log plugin CCs. If set to "all", log
// everything. The default value is "all". This must be used with either
// MOZ_CC_LOG_ALL or MOZ_CC_LOG_SHUTDOWN for it to do anything.
//
// MOZ_CC_ALL_TRACES: If set to "all", any cycle collector
// logging done will be WantAllTraces, which disables
// various cycle collector optimizations to give a fuller picture of
// the heap. If set to "shutdown", only shutdown logging will be WantAllTraces.
// The default is none.
//
// MOZ_CC_RUN_DURING_SHUTDOWN: In non-DEBUG builds, if this is set,
// run cycle collections at shutdown.
//
// MOZ_CC_LOG_DIRECTORY: The directory in which logs are placed (such as
// logs from MOZ_CC_LOG_ALL and MOZ_CC_LOG_SHUTDOWN, or other uses
// of nsICycleCollectorListener)

// Various parameters of this collector can be tuned using environment
// variables.

struct nsCycleCollectorParams {
  bool mLogAll;
  bool mLogShutdown;
  bool mAllTracesAll;
  bool mAllTracesShutdown;
  bool mLogThisThread;

  nsCycleCollectorParams()
      : mLogAll(PR_GetEnv("MOZ_CC_LOG_ALL") != nullptr),
        mLogShutdown(PR_GetEnv("MOZ_CC_LOG_SHUTDOWN") != nullptr),
        mAllTracesAll(false),
        mAllTracesShutdown(false) {
    const char* logThreadEnv = PR_GetEnv("MOZ_CC_LOG_THREAD");
    bool threadLogging = true;
    if (logThreadEnv && !!strcmp(logThreadEnv, "all")) {
      if (NS_IsMainThread()) {
        threadLogging = !strcmp(logThreadEnv, "main");
      } else {
        threadLogging = !strcmp(logThreadEnv, "worker");
      }
    }

    const char* logProcessEnv = PR_GetEnv("MOZ_CC_LOG_PROCESS");
    bool processLogging = true;
    if (logProcessEnv && !!strcmp(logProcessEnv, "all")) {
      switch (XRE_GetProcessType()) {
        case GeckoProcessType_Default:
          processLogging = !strcmp(logProcessEnv, "main");
          break;
        case GeckoProcessType_Plugin:
          processLogging = !strcmp(logProcessEnv, "plugins");
          break;
        case GeckoProcessType_Content:
          processLogging = !strcmp(logProcessEnv, "content");
          break;
        default:
          processLogging = false;
          break;
      }
    }
    mLogThisThread = threadLogging && processLogging;

    const char* allTracesEnv = PR_GetEnv("MOZ_CC_ALL_TRACES");
    if (allTracesEnv) {
      if (!strcmp(allTracesEnv, "all")) {
        mAllTracesAll = true;
      } else if (!strcmp(allTracesEnv, "shutdown")) {
        mAllTracesShutdown = true;
      }
    }
  }

  bool LogThisCC(bool aIsShutdown) {
    return (mLogAll || (aIsShutdown && mLogShutdown)) && mLogThisThread;
  }

  bool AllTracesThisCC(bool aIsShutdown) {
    return mAllTracesAll || (aIsShutdown && mAllTracesShutdown);
  }
};

#ifdef COLLECT_TIME_DEBUG
class TimeLog {
 public:
  TimeLog() : mLastCheckpoint(TimeStamp::Now()) {}

  void Checkpoint(const char* aEvent) {
    TimeStamp now = TimeStamp::Now();
    double dur = (now - mLastCheckpoint).ToMilliseconds();
    if (dur >= 0.5) {
      printf("cc: %s took %.1fms\n", aEvent, dur);
    }
    mLastCheckpoint = now;
  }

 private:
  TimeStamp mLastCheckpoint;
};
#else
class TimeLog {
 public:
  TimeLog() {}
  void Checkpoint(const char* aEvent) {}
};
#endif

////////////////////////////////////////////////////////////////////////
// Base types
////////////////////////////////////////////////////////////////////////

class PtrInfo;

class EdgePool {
 public:
  // EdgePool allocates arrays of void*, primarily to hold PtrInfo*.
  // However, at the end of a block, the last two pointers are a null
  // and then a void** pointing to the next block. This allows
  // EdgePool::Iterators to be a single word but still capable of crossing
  // block boundaries.

  EdgePool() {
    mSentinelAndBlocks[0].block = nullptr;
    mSentinelAndBlocks[1].block = nullptr;
  }

  ~EdgePool() {
    MOZ_ASSERT(!mSentinelAndBlocks[0].block && !mSentinelAndBlocks[1].block,
               "Didn't call Clear()?");
  }

  void Clear() {
    EdgeBlock* b = EdgeBlocks();
    while (b) {
      EdgeBlock* next = b->Next();
      delete b;
      b = next;
    }

    mSentinelAndBlocks[0].block = nullptr;
    mSentinelAndBlocks[1].block = nullptr;
  }

#ifdef DEBUG
  bool IsEmpty() {
    return !mSentinelAndBlocks[0].block && !mSentinelAndBlocks[1].block;
  }
#endif

 private:
  struct EdgeBlock;
  union PtrInfoOrBlock {
    // Use a union to avoid reinterpret_cast and the ensuing
    // potential aliasing bugs.
    PtrInfo* ptrInfo;
    EdgeBlock* block;
  };
  struct EdgeBlock {
    enum { EdgeBlockSize = 16 * 1024 };

    PtrInfoOrBlock mPointers[EdgeBlockSize];
    EdgeBlock() {
      mPointers[EdgeBlockSize - 2].block = nullptr;  // sentinel
      mPointers[EdgeBlockSize - 1].block = nullptr;  // next block pointer
    }
    EdgeBlock*& Next() { return mPointers[EdgeBlockSize - 1].block; }
    PtrInfoOrBlock* Start() { return &mPointers[0]; }
    PtrInfoOrBlock* End() { return &mPointers[EdgeBlockSize - 2]; }
  };

  // Store the null sentinel so that we can have valid iterators
  // before adding any edges and without adding any blocks.
  PtrInfoOrBlock mSentinelAndBlocks[2];

  EdgeBlock*& EdgeBlocks() { return mSentinelAndBlocks[1].block; }
  EdgeBlock* EdgeBlocks() const { return mSentinelAndBlocks[1].block; }

 public:
  class Iterator {
   public:
    Iterator() : mPointer(nullptr) {}
    explicit Iterator(PtrInfoOrBlock* aPointer) : mPointer(aPointer) {}
    Iterator(const Iterator& aOther) : mPointer(aOther.mPointer) {}

    Iterator& operator++() {
      if (!mPointer->ptrInfo) {
        // Null pointer is a sentinel for link to the next block.
        mPointer = (mPointer + 1)->block->mPointers;
      }
      ++mPointer;
      return *this;
    }

    PtrInfo* operator*() const {
      if (!mPointer->ptrInfo) {
        // Null pointer is a sentinel for link to the next block.
        return (mPointer + 1)->block->mPointers->ptrInfo;
      }
      return mPointer->ptrInfo;
    }
    bool operator==(const Iterator& aOther) const {
      return mPointer == aOther.mPointer;
    }
    bool operator!=(const Iterator& aOther) const {
      return mPointer != aOther.mPointer;
    }

#ifdef DEBUG_CC_GRAPH
    bool Initialized() const { return mPointer != nullptr; }
#endif

   private:
    PtrInfoOrBlock* mPointer;
  };

  class Builder;
  friend class Builder;
  class Builder {
   public:
    explicit Builder(EdgePool& aPool)
        : mCurrent(&aPool.mSentinelAndBlocks[0]),
          mBlockEnd(&aPool.mSentinelAndBlocks[0]),
          mNextBlockPtr(&aPool.EdgeBlocks()) {}

    Iterator Mark() { return Iterator(mCurrent); }

    void Add(PtrInfo* aEdge) {
      if (mCurrent == mBlockEnd) {
        EdgeBlock* b = new EdgeBlock();
        *mNextBlockPtr = b;
        mCurrent = b->Start();
        mBlockEnd = b->End();
        mNextBlockPtr = &b->Next();
      }
      (mCurrent++)->ptrInfo = aEdge;
    }

   private:
    // mBlockEnd points to space for null sentinel
    PtrInfoOrBlock* mCurrent;
    PtrInfoOrBlock* mBlockEnd;
    EdgeBlock** mNextBlockPtr;
  };

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
    size_t n = 0;
    EdgeBlock* b = EdgeBlocks();
    while (b) {
      n += aMallocSizeOf(b);
      b = b->Next();
    }
    return n;
  }
};
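The null-sentinel/next-block trick that keeps `EdgePool::Iterator` a single word can be seen in miniature below. The toy `Block` and `IntOrBlock` types, and the tiny block size chosen to force a boundary crossing, are assumptions for illustration, not the real pool:

```cpp
// Miniature of EdgePool's iterator scheme: each block ends with a null
// sentinel followed by a next-block pointer, so one raw pointer can walk
// across blocks. Illustrative sketch only.
#include <cassert>

struct Block;
union IntOrBlock {
  int* value;    // non-null: a stored element
  Block* block;  // used by the two trailing slots
};

struct Block {
  static const int kSize = 4;  // 2 usable slots + sentinel + next pointer
  IntOrBlock mPointers[kSize];
  Block() {
    mPointers[kSize - 2].block = nullptr;  // sentinel
    mPointers[kSize - 1].block = nullptr;  // next-block pointer
  }
};

struct Iterator {
  IntOrBlock* mPointer;

  int* operator*() const {
    if (!mPointer->value) {
      // Sitting on the sentinel: the element is the next block's first slot.
      return (mPointer + 1)->block->mPointers->value;
    }
    return mPointer->value;
  }

  Iterator& operator++() {
    if (!mPointer->value) {
      // Hit the sentinel: hop through the next-block pointer.
      mPointer = (mPointer + 1)->block->mPointers;
    }
    ++mPointer;
    return *this;
  }
};
```

Filling two blocks and iterating from the first element of the first block to the end of the second visits every element, with the sentinel handled entirely inside `operator++`/`operator*`.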

#ifdef DEBUG_CC_GRAPH
#  define CC_GRAPH_ASSERT(b) MOZ_ASSERT(b)
#else
#  define CC_GRAPH_ASSERT(b)
#endif

#define CC_TELEMETRY(_name, _value)                                            \
  do {                                                                         \
    if (NS_IsMainThread()) {                                                   \
      Telemetry::Accumulate(Telemetry::CYCLE_COLLECTOR##_name, _value);        \
    } else {                                                                   \
      Telemetry::Accumulate(Telemetry::CYCLE_COLLECTOR_WORKER##_name, _value); \
    }                                                                          \
  } while (0)

enum NodeColor { black, white, grey };

// This structure should be kept as small as possible; we may expect
// hundreds of thousands of them to be allocated and touched
// repeatedly during each cycle collection.
class PtrInfo final {
 public:
  // mParticipant knows a more concrete type.
  void* mPointer;
  nsCycleCollectionParticipant* mParticipant;
  uint32_t mColor : 2;
  uint32_t mInternalRefs : 30;
  uint32_t mRefCount;

 private:
  EdgePool::Iterator mFirstChild;

  static const uint32_t kInitialRefCount = UINT32_MAX - 1;

 public:
  PtrInfo(void* aPointer, nsCycleCollectionParticipant* aParticipant)
      : mPointer(aPointer),
        mParticipant(aParticipant),
        mColor(grey),
        mInternalRefs(0),
        mRefCount(kInitialRefCount),
        mFirstChild() {
    MOZ_ASSERT(aParticipant);

    // We initialize mRefCount to a large non-zero value so
    // that it doesn't look like a JS object to the cycle collector
    // in the case where the object dies before being traversed.
    MOZ_ASSERT(!IsGrayJS() && !IsBlackJS());
  }

  // Allow NodePool::NodeBlock's constructor to compile.
  PtrInfo()
      : mPointer{nullptr},
        mParticipant{nullptr},
        mColor{0},
        mInternalRefs{0},
        mRefCount{0} {
    MOZ_ASSERT_UNREACHABLE("should never be called");
  }

  bool IsGrayJS() const { return mRefCount == 0; }

  bool IsBlackJS() const { return mRefCount == UINT32_MAX; }

  bool WasTraversed() const { return mRefCount != kInitialRefCount; }

  EdgePool::Iterator FirstChild() const {
    CC_GRAPH_ASSERT(mFirstChild.Initialized());
    return mFirstChild;
  }

  // this PtrInfo must be part of a NodePool
  EdgePool::Iterator LastChild() const {
    CC_GRAPH_ASSERT((this + 1)->mFirstChild.Initialized());
    return (this + 1)->mFirstChild;
  }

  void SetFirstChild(EdgePool::Iterator aFirstChild) {
    CC_GRAPH_ASSERT(aFirstChild.Initialized());
    mFirstChild = aFirstChild;
  }

  // this PtrInfo must be part of a NodePool
  void SetLastChild(EdgePool::Iterator aLastChild) {
    CC_GRAPH_ASSERT(aLastChild.Initialized());
    (this + 1)->mFirstChild = aLastChild;
  }

  void AnnotatedReleaseAssert(bool aCondition, const char* aMessage);
};

void PtrInfo::AnnotatedReleaseAssert(bool aCondition, const char* aMessage) {
  if (aCondition) {
    return;
  }

  const char* piName = "Unknown";
  if (mParticipant) {
    piName = mParticipant->ClassName();
  }
  nsPrintfCString msg("%s, for class %s", aMessage, piName);
  CrashReporter::AnnotateCrashReport(CrashReporter::Annotation::CycleCollector,
                                     msg);

  MOZ_CRASH();
}
|
|
|
|
|

/**
 * A structure designed to be used like a linked list of PtrInfo, except
 * it allocates many PtrInfos at a time.
 */
class NodePool {
 private:
  // The -2 allows us to use |NodeBlockSize + 1| for |mEntries|, and fit
  // |mNext|, all without causing slop.
  enum { NodeBlockSize = 4 * 1024 - 2 };

  struct NodeBlock {
    // We create and destroy NodeBlock using malloc/free rather than new
    // and delete to avoid calling its constructor and destructor.
    NodeBlock() : mNext{nullptr} {
      MOZ_ASSERT_UNREACHABLE("should never be called");

      // Ensure NodeBlock is the right size (see the comment on NodeBlockSize
      // above).
      static_assert(
          sizeof(NodeBlock) == 81904 ||  // 32-bit; equals 19.996 x 4 KiB pages
              sizeof(NodeBlock) ==
                  131048,  // 64-bit; equals 31.994 x 4 KiB pages
          "ill-sized NodeBlock");
    }
    ~NodeBlock() { MOZ_ASSERT_UNREACHABLE("should never be called"); }

    NodeBlock* mNext;
    PtrInfo mEntries[NodeBlockSize + 1];  // +1 to store last child of last node
  };
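
  // A worked check of the NodeBlockSize arithmetic above (an illustrative
  // sketch; the authoritative numbers are the static_asserts in the NodeBlock
  // constructor). With NodeBlockSize = 4 * 1024 - 2 = 4094, mEntries holds
  // NodeBlockSize + 1 = 4095 PtrInfos:
  //
  //   64-bit: the asserted total implies sizeof(PtrInfo) == 32, so
  //           4095 * 32 = 131040 bytes of entries, plus 8 bytes for mNext,
  //           gives sizeof(NodeBlock) == 131048.
  //   32-bit: the asserted total implies sizeof(PtrInfo) == 20, so
  //           4095 * 20 = 81900 bytes of entries, plus 4 bytes for mNext,
  //           gives sizeof(NodeBlock) == 81904.
  //
  // Both land just under a whole number of 4 KiB pages, which is the
  // "without causing slop" property that the -2 is buying.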

 public:
  NodePool() : mBlocks(nullptr), mLast(nullptr) {}

  ~NodePool() { MOZ_ASSERT(!mBlocks, "Didn't call Clear()?"); }

  void Clear() {
    NodeBlock* b = mBlocks;
    while (b) {
      NodeBlock* n = b->mNext;
      free(b);
      b = n;
    }

    mBlocks = nullptr;
    mLast = nullptr;
  }

#ifdef DEBUG
  bool IsEmpty() { return !mBlocks && !mLast; }
#endif

  class Builder;
  friend class Builder;
  class Builder {
   public:
    explicit Builder(NodePool& aPool)
        : mNextBlock(&aPool.mBlocks), mNext(aPool.mLast), mBlockEnd(nullptr) {
      MOZ_ASSERT(!aPool.mBlocks && !aPool.mLast, "pool not empty");
    }

    PtrInfo* Add(void* aPointer, nsCycleCollectionParticipant* aParticipant) {
      if (mNext == mBlockEnd) {
        NodeBlock* block = static_cast<NodeBlock*>(malloc(sizeof(NodeBlock)));
        if (!block) {
          return nullptr;
        }

        *mNextBlock = block;
        mNext = block->mEntries;
        mBlockEnd = block->mEntries + NodeBlockSize;
        block->mNext = nullptr;
        mNextBlock = &block->mNext;
      }
      return new (mozilla::KnownNotNull, mNext++)
          PtrInfo(aPointer, aParticipant);
    }

   private:
    NodeBlock** mNextBlock;
    PtrInfo*& mNext;
    PtrInfo* mBlockEnd;
  };

  class Enumerator;
  friend class Enumerator;
  class Enumerator {
   public:
    explicit Enumerator(NodePool& aPool)
        : mFirstBlock(aPool.mBlocks),
          mCurBlock(nullptr),
          mNext(nullptr),
          mBlockEnd(nullptr),
          mLast(aPool.mLast) {}

    bool IsDone() const { return mNext == mLast; }

    bool AtBlockEnd() const { return mNext == mBlockEnd; }

    PtrInfo* GetNext() {
      MOZ_ASSERT(!IsDone(), "calling GetNext when done");
      if (mNext == mBlockEnd) {
        NodeBlock* nextBlock = mCurBlock ? mCurBlock->mNext : mFirstBlock;
        mNext = nextBlock->mEntries;
        mBlockEnd = mNext + NodeBlockSize;
        mCurBlock = nextBlock;
      }
      return mNext++;
    }

   private:
    // mFirstBlock is a reference to allow an Enumerator to be constructed
    // for an empty graph.
    NodeBlock*& mFirstBlock;
    NodeBlock* mCurBlock;
    // mNext is the next value we want to return, unless mNext == mBlockEnd
    // NB: mLast is a reference to allow enumerating while building!
    PtrInfo* mNext;
    PtrInfo* mBlockEnd;
    PtrInfo*& mLast;
  };

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
    // We don't measure the things pointed to by mEntries[] because those
    // pointers are non-owning.
    size_t n = 0;
    NodeBlock* b = mBlocks;
    while (b) {
      n += aMallocSizeOf(b);
      b = b->mNext;
    }
    return n;
  }

 private:
  NodeBlock* mBlocks;
  PtrInfo* mLast;
};
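
// Illustrative sketch (not part of the collector): the intended lifecycle of
// a NodePool, as the graph-building and scanning phases use it. The names
// somePtr and someParticipant below are placeholders, not real identifiers.
//
//   NodePool pool;
//   {
//     NodePool::Builder builder(pool);  // pool must be empty here
//     PtrInfo* pi = builder.Add(somePtr, someParticipant);
//     // Add() returns nullptr if a new NodeBlock could not be allocated.
//   }
//   NodePool::Enumerator etor(pool);  // may also run while still building
//   while (!etor.IsDone()) {
//     PtrInfo* pi = etor.GetNext();
//     // ... visit pi ...
//   }
//   pool.Clear();  // required before the NodePool is destroyed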

struct PtrToNodeHashPolicy {
  using Key = PtrInfo*;
  using Lookup = void*;

  static js::HashNumber hash(const Lookup& aLookup) {
    return mozilla::HashGeneric(aLookup);
  }

  static bool match(const Key& aKey, const Lookup& aLookup) {
    return aKey->mPointer == aLookup;
  }
};

struct WeakMapping {
  // map and key will be null if the corresponding objects are GC marked
  PtrInfo* mMap;
  PtrInfo* mKey;
  PtrInfo* mKeyDelegate;
  PtrInfo* mVal;
};

class CCGraphBuilder;

struct CCGraph {
  NodePool mNodes;
  EdgePool mEdges;
  nsTArray<WeakMapping> mWeakMaps;
  uint32_t mRootCount;

 private:
  friend CCGraphBuilder;

  mozilla::HashSet<PtrInfo*, PtrToNodeHashPolicy> mPtrInfoMap;

  bool mOutOfMemory;

  static const uint32_t kInitialMapLength = 16384;

 public:
  CCGraph()
      : mRootCount(0), mPtrInfoMap(kInitialMapLength), mOutOfMemory(false) {}

  ~CCGraph() {}

  void Init() { MOZ_ASSERT(IsEmpty(), "Failed to call CCGraph::Clear"); }

  void Clear() {
    mNodes.Clear();
    mEdges.Clear();
    mWeakMaps.Clear();
    mRootCount = 0;
    mPtrInfoMap.clearAndCompact();
    mOutOfMemory = false;
  }

#ifdef DEBUG
  bool IsEmpty() {
    return mNodes.IsEmpty() && mEdges.IsEmpty() && mWeakMaps.IsEmpty() &&
           mRootCount == 0 && mPtrInfoMap.empty();
  }
#endif

  PtrInfo* FindNode(void* aPtr);
  void RemoveObjectFromMap(void* aObject);

  uint32_t MapCount() const { return mPtrInfoMap.count(); }

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
    size_t n = 0;

    n += mNodes.SizeOfExcludingThis(aMallocSizeOf);
    n += mEdges.SizeOfExcludingThis(aMallocSizeOf);

    // We don't measure what the WeakMappings point to, because the
    // pointers are non-owning.
    n += mWeakMaps.ShallowSizeOfExcludingThis(aMallocSizeOf);

    n += mPtrInfoMap.shallowSizeOfExcludingThis(aMallocSizeOf);

    return n;
  }
};

PtrInfo* CCGraph::FindNode(void* aPtr) {
  auto p = mPtrInfoMap.lookup(aPtr);
  return p ? *p : nullptr;
}

void CCGraph::RemoveObjectFromMap(void* aObj) {
  auto p = mPtrInfoMap.lookup(aObj);
  if (p) {
    PtrInfo* pinfo = *p;
    pinfo->mPointer = nullptr;
    pinfo->mParticipant = nullptr;
    mPtrInfoMap.remove(p);
  }
}

static nsISupports* CanonicalizeXPCOMParticipant(nsISupports* aIn) {
  nsISupports* out = nullptr;
  aIn->QueryInterface(NS_GET_IID(nsCycleCollectionISupports),
                      reinterpret_cast<void**>(&out));
  return out;
}

struct nsPurpleBufferEntry {
  nsPurpleBufferEntry(void* aObject, nsCycleCollectingAutoRefCnt* aRefCnt,
                      nsCycleCollectionParticipant* aParticipant)
      : mObject(aObject), mRefCnt(aRefCnt), mParticipant(aParticipant) {}

  nsPurpleBufferEntry(nsPurpleBufferEntry&& aOther)
      : mObject(nullptr), mRefCnt(nullptr), mParticipant(nullptr) {
    Swap(aOther);
  }

  void Swap(nsPurpleBufferEntry& aOther) {
    std::swap(mObject, aOther.mObject);
    std::swap(mRefCnt, aOther.mRefCnt);
    std::swap(mParticipant, aOther.mParticipant);
  }

  void Clear() {
    mRefCnt->RemoveFromPurpleBuffer();
    mRefCnt = nullptr;
    mObject = nullptr;
    mParticipant = nullptr;
  }

  ~nsPurpleBufferEntry() {
    if (mRefCnt) {
      mRefCnt->RemoveFromPurpleBuffer();
    }
  }

  void* mObject;
  nsCycleCollectingAutoRefCnt* mRefCnt;
  nsCycleCollectionParticipant* mParticipant;  // nullptr for nsISupports
};

class nsCycleCollector;

struct nsPurpleBuffer {
 private:
  uint32_t mCount;

  // Try to match the size of a jemalloc bucket, to minimize slop bytes.
  // - On 32-bit platforms sizeof(nsPurpleBufferEntry) is 12, so mEntries'
  //   Segment is 16,372 bytes.
  // - On 64-bit platforms sizeof(nsPurpleBufferEntry) is 24, so mEntries'
  //   Segment is 32,760 bytes.
  static const uint32_t kEntriesPerSegment = 1365;
  static const size_t kSegmentSize =
      sizeof(nsPurpleBufferEntry) * kEntriesPerSegment;
  typedef SegmentedVector<nsPurpleBufferEntry, kSegmentSize,
                          InfallibleAllocPolicy>
      PurpleBufferVector;
  PurpleBufferVector mEntries;

 public:
  nsPurpleBuffer() : mCount(0) {
    static_assert(
        sizeof(PurpleBufferVector::Segment) == 16372 ||  // 32-bit
            sizeof(PurpleBufferVector::Segment) == 32760 ||  // 64-bit
            sizeof(PurpleBufferVector::Segment) == 32744,  // 64-bit Windows
        "ill-sized nsPurpleBuffer::mEntries");
  }

  ~nsPurpleBuffer() {}

  // This method compacts mEntries.
  template <class PurpleVisitor>
  void VisitEntries(PurpleVisitor& aVisitor) {
    Maybe<AutoRestore<bool>> ar;
    if (NS_IsMainThread()) {
      ar.emplace(gNurseryPurpleBufferEnabled);
      gNurseryPurpleBufferEnabled = false;
      ClearNurseryPurpleBuffer();
    }

    if (mEntries.IsEmpty()) {
      return;
    }

    uint32_t oldLength = mEntries.Length();
    uint32_t keptLength = 0;
    auto revIter = mEntries.IterFromLast();
    auto iter = mEntries.Iter();
    // After iteration this points to the first empty entry.
    auto firstEmptyIter = mEntries.Iter();
    auto iterFromLastEntry = mEntries.IterFromLast();
    for (; !iter.Done(); iter.Next()) {
      nsPurpleBufferEntry& e = iter.Get();
      if (e.mObject) {
        if (!aVisitor.Visit(*this, &e)) {
          return;
        }
      }

      // Visit call above may have cleared the entry, or the entry was empty
      // already.
      if (!e.mObject) {
        // Try to find a non-empty entry from the end of the vector.
        for (; !revIter.Done(); revIter.Prev()) {
          nsPurpleBufferEntry& otherEntry = revIter.Get();
          if (&e == &otherEntry) {
            break;
          }
          if (otherEntry.mObject) {
            if (!aVisitor.Visit(*this, &otherEntry)) {
              return;
            }
            // Visit may have cleared otherEntry.
            if (otherEntry.mObject) {
              e.Swap(otherEntry);
              revIter.Prev();  // We've swapped this now empty entry.
              break;
            }
          }
        }
      }

      // Entry is non-empty even after the Visit call, ensure it is kept
      // in mEntries.
      if (e.mObject) {
        firstEmptyIter.Next();
        ++keptLength;
      }

      if (&e == &revIter.Get()) {
        break;
      }
    }

    // There were some empty entries.
    if (oldLength != keptLength) {
      // While visiting entries, some new ones were possibly added. This can
      // happen during CanSkip. Move all such new entries to be after other
      // entries. Note, we don't call Visit on newly added entries!
      if (&iterFromLastEntry.Get() != &mEntries.GetLast()) {
        iterFromLastEntry.Next();  // Now pointing to the first added entry.
        auto& iterForNewEntries = iterFromLastEntry;
        while (!iterForNewEntries.Done()) {
          MOZ_ASSERT(!firstEmptyIter.Done());
          MOZ_ASSERT(!firstEmptyIter.Get().mObject);
          firstEmptyIter.Get().Swap(iterForNewEntries.Get());
          firstEmptyIter.Next();
          iterForNewEntries.Next();
        }
      }

      mEntries.PopLastN(oldLength - keptLength);
    }
  }

  void FreeBlocks() {
    mCount = 0;
    mEntries.Clear();
  }

  void SelectPointers(CCGraphBuilder& aBuilder);

  // RemoveSkippable removes entries from the purple buffer synchronously
  // (1) if !aAsyncSnowWhiteFreeing and nsPurpleBufferEntry::mRefCnt is 0 or
  // (2) if nsXPCOMCycleCollectionParticipant::CanSkip() for the obj or
  // (3) if nsPurpleBufferEntry::mRefCnt->IsPurple() is false.
  // (4) If aRemoveChildlessNodes is true, then any nodes in the purple buffer
  //     that will have no children in the cycle collector graph will also be
  //     removed. CanSkip() may be run on these children.
  void RemoveSkippable(nsCycleCollector* aCollector, js::SliceBudget& aBudget,
                       bool aRemoveChildlessNodes, bool aAsyncSnowWhiteFreeing,
                       CC_ForgetSkippableCallback aCb);

  MOZ_ALWAYS_INLINE void Put(void* aObject, nsCycleCollectionParticipant* aCp,
                             nsCycleCollectingAutoRefCnt* aRefCnt) {
    nsPurpleBufferEntry entry(aObject, aRefCnt, aCp);
    Unused << mEntries.Append(std::move(entry));
    MOZ_ASSERT(!entry.mRefCnt, "Move didn't work!");
    ++mCount;
  }

  void Remove(nsPurpleBufferEntry* aEntry) {
    MOZ_ASSERT(mCount != 0, "must have entries");
    --mCount;
    aEntry->Clear();
  }

  uint32_t Count() const { return mCount; }

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
    return mEntries.SizeOfExcludingThis(aMallocSizeOf);
  }
};
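
// Illustrative sketch (not part of the collector): how the purple buffer is
// typically driven. An object whose refcount dropped without reaching zero is
// Put() into the buffer as a cycle candidate; at the start of a collection,
// SelectPointers() visits every entry, adds the still-purple ones to the
// graph as roots, and removes the rest. Roughly (placeholder names):
//
//   nsPurpleBuffer buf;
//   buf.Put(someObject, someParticipant, someRefCnt);  // suspect a candidate
//   // ... time passes; ForgetSkippable may prune entries ...
//   buf.SelectPointers(builder);  // at CC start: drain buf into the graph
//   MOZ_ASSERT(buf.Count() == 0);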

static bool AddPurpleRoot(CCGraphBuilder& aBuilder, void* aRoot,
                          nsCycleCollectionParticipant* aParti);

struct SelectPointersVisitor {
  explicit SelectPointersVisitor(CCGraphBuilder& aBuilder)
      : mBuilder(aBuilder) {}

  bool Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry) {
    MOZ_ASSERT(aEntry->mObject, "Null object in purple buffer");
    MOZ_ASSERT(aEntry->mRefCnt->get() != 0,
               "SelectPointersVisitor: snow-white object in the purple buffer");
    if (!aEntry->mRefCnt->IsPurple() ||
        AddPurpleRoot(mBuilder, aEntry->mObject, aEntry->mParticipant)) {
      aBuffer.Remove(aEntry);
    }
    return true;
  }

 private:
  CCGraphBuilder& mBuilder;
};

void nsPurpleBuffer::SelectPointers(CCGraphBuilder& aBuilder) {
  SelectPointersVisitor visitor(aBuilder);
  VisitEntries(visitor);

  MOZ_ASSERT(mCount == 0, "AddPurpleRoot failed");
  if (mCount == 0) {
    FreeBlocks();
  }
}

enum ccPhase {
  IdlePhase,
  GraphBuildingPhase,
  ScanAndCollectWhitePhase,
  CleanupPhase
};

enum ccType {
  SliceCC,    /* If a CC is in progress, continue it.
                 Otherwise, start a new one. */
  ManualCC,   /* Explicitly triggered. */
  ShutdownCC  /* Shutdown CC, used for finding leaks. */
};

////////////////////////////////////////////////////////////////////////
// Top level structure for the cycle collector.
////////////////////////////////////////////////////////////////////////

using js::SliceBudget;

class JSPurpleBuffer;

class nsCycleCollector : public nsIMemoryReporter {
 public:
  NS_DECL_ISUPPORTS
  NS_DECL_NSIMEMORYREPORTER

 private:
  bool mActivelyCollecting;
  bool mFreeingSnowWhite;
  // mScanInProgress should be false when we're collecting white objects.
  bool mScanInProgress;
  CycleCollectorResults mResults;
  TimeStamp mCollectionStart;

  CycleCollectedJSRuntime* mCCJSRuntime;

  ccPhase mIncrementalPhase;
  CCGraph mGraph;
  nsAutoPtr<CCGraphBuilder> mBuilder;
  RefPtr<nsCycleCollectorLogger> mLogger;

#ifdef DEBUG
  nsISerialEventTarget* mEventTarget;
#endif

  nsCycleCollectorParams mParams;

  uint32_t mWhiteNodeCount;

  CC_BeforeUnlinkCallback mBeforeUnlinkCB;
  CC_ForgetSkippableCallback mForgetSkippableCB;

  nsPurpleBuffer mPurpleBuf;

  uint32_t mUnmergedNeeded;
  uint32_t mMergedInARow;

  RefPtr<JSPurpleBuffer> mJSPurpleBuffer;

 private:
  virtual ~nsCycleCollector();

 public:
  nsCycleCollector();

  void SetCCJSRuntime(CycleCollectedJSRuntime* aCCRuntime);
  void ClearCCJSRuntime();

  void SetBeforeUnlinkCallback(CC_BeforeUnlinkCallback aBeforeUnlinkCB) {
    CheckThreadSafety();
    mBeforeUnlinkCB = aBeforeUnlinkCB;
  }

  void SetForgetSkippableCallback(
      CC_ForgetSkippableCallback aForgetSkippableCB) {
    CheckThreadSafety();
    mForgetSkippableCB = aForgetSkippableCB;
  }

  void Suspect(void* aPtr, nsCycleCollectionParticipant* aCp,
               nsCycleCollectingAutoRefCnt* aRefCnt);
  void SuspectNurseryEntries();
  uint32_t SuspectedCount();
  void ForgetSkippable(js::SliceBudget& aBudget, bool aRemoveChildlessNodes,
                       bool aAsyncSnowWhiteFreeing);
  bool FreeSnowWhite(bool aUntilNoSWInPurpleBuffer);
  bool FreeSnowWhiteWithBudget(js::SliceBudget& aBudget);

  // This method assumes its argument is already canonicalized.
  void RemoveObjectFromGraph(void* aPtr);

  void PrepareForGarbageCollection();
  void FinishAnyCurrentCollection();

  bool Collect(ccType aCCType, SliceBudget& aBudget,
               nsICycleCollectorListener* aManualListener,
               bool aPreferShorterSlices = false);
  void Shutdown(bool aDoCollect);

  bool IsIdle() const { return mIncrementalPhase == IdlePhase; }

  void SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
                           size_t* aObjectSize, size_t* aGraphSize,
                           size_t* aPurpleBufferSize) const;

  JSPurpleBuffer* GetJSPurpleBuffer();

  CycleCollectedJSRuntime* Runtime() { return mCCJSRuntime; }

 private:
  void CheckThreadSafety();
  void ShutdownCollect();

  void FixGrayBits(bool aForceGC, TimeLog& aTimeLog);
  bool IsIncrementalGCInProgress();
  void FinishAnyIncrementalGCInProgress();
  bool ShouldMergeZones(ccType aCCType);

  void BeginCollection(ccType aCCType,
                       nsICycleCollectorListener* aManualListener);
  void MarkRoots(SliceBudget& aBudget);
  void ScanRoots(bool aFullySynchGraphBuild);
  void ScanIncrementalRoots();
  void ScanWhiteNodes(bool aFullySynchGraphBuild);
  void ScanBlackNodes();
  void ScanWeakMaps();

  // returns whether anything was collected
  bool CollectWhite();

  void CleanupAfterCollection();
};

NS_IMPL_ISUPPORTS(nsCycleCollector, nsIMemoryReporter)

/**
 * GraphWalker is templatized over a Visitor class that must provide
 * the following two methods:
 *
 * bool ShouldVisitNode(PtrInfo const *pi);
 * void VisitNode(PtrInfo *pi);
 */
template <class Visitor>
class GraphWalker {
 private:
  Visitor mVisitor;

  void DoWalk(nsDeque& aQueue);

  void CheckedPush(nsDeque& aQueue, PtrInfo* aPi) {
    if (!aPi) {
      MOZ_CRASH();
    }
    if (!aQueue.Push(aPi, fallible)) {
      mVisitor.Failed();
    }
  }

 public:
  void Walk(PtrInfo* aPi);
  void WalkFromRoots(CCGraph& aGraph);
  // copy-constructing the visitor should be cheap, and less
  // indirection than using a reference
  explicit GraphWalker(const Visitor aVisitor) : mVisitor(aVisitor) {}
};
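
// Illustrative sketch (not part of the collector): a minimal Visitor shape
// satisfying the interface documented above. Note that CheckedPush also calls
// mVisitor.Failed() when the work queue cannot grow, so in practice a visitor
// provides that method too. The grey-node test below is an assumption about
// PtrInfo's color field, shown only for flavor:
//
//   struct CountingVisitor {
//     explicit CountingVisitor(uint32_t& aCount) : mCount(aCount) {}
//     bool ShouldVisitNode(PtrInfo const* pi) { return pi->mColor == grey; }
//     void VisitNode(PtrInfo* pi) { ++mCount; }
//     void Failed() { /* record that the walk ran out of memory */ }
//     uint32_t& mCount;
//   };
//
//   GraphWalker<CountingVisitor>(CountingVisitor(count)).Walk(somePtrInfo);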

////////////////////////////////////////////////////////////////////////
// The static collector struct
////////////////////////////////////////////////////////////////////////

struct CollectorData {
  RefPtr<nsCycleCollector> mCollector;
  CycleCollectedJSContext* mContext;
};

static MOZ_THREAD_LOCAL(CollectorData*) sCollectorData;

////////////////////////////////////////////////////////////////////////
// Utility functions
////////////////////////////////////////////////////////////////////////
|
2014-05-13 21:41:38 +04:00
|
|
|
static inline void ToParticipant(nsISupports* aPtr,
                                 nsXPCOMCycleCollectionParticipant** aCp) {
  // We use QI to move from an nsISupports to an
  // nsXPCOMCycleCollectionParticipant, which is a per-class singleton helper
  // object that implements traversal and unlinking logic for the nsISupports
  // in question.
  *aCp = nullptr;
  CallQueryInterface(aPtr, aCp);
}

static void ToParticipant(void* aParti, nsCycleCollectionParticipant** aCp) {
  // If the participant is null, this is an nsISupports participant,
  // so we must QI to get the real participant.
  if (!*aCp) {
    nsISupports* nsparti = static_cast<nsISupports*>(aParti);
    MOZ_ASSERT(CanonicalizeXPCOMParticipant(nsparti) == nsparti);
    nsXPCOMCycleCollectionParticipant* xcp;
    ToParticipant(nsparti, &xcp);
    *aCp = xcp;
  }
}

template <class Visitor>
MOZ_NEVER_INLINE void GraphWalker<Visitor>::Walk(PtrInfo* aPi) {
  nsDeque queue;
  CheckedPush(queue, aPi);
  DoWalk(queue);
}

template <class Visitor>
MOZ_NEVER_INLINE void GraphWalker<Visitor>::WalkFromRoots(CCGraph& aGraph) {
  nsDeque queue;
  NodePool::Enumerator etor(aGraph.mNodes);
  for (uint32_t i = 0; i < aGraph.mRootCount; ++i) {
    CheckedPush(queue, etor.GetNext());
  }
  DoWalk(queue);
}

template <class Visitor>
MOZ_NEVER_INLINE void GraphWalker<Visitor>::DoWalk(nsDeque& aQueue) {
  // Use aQueue to match the breadth-first traversal used when we
  // built the graph, for hopefully-better locality.
  while (aQueue.GetSize() > 0) {
    PtrInfo* pi = static_cast<PtrInfo*>(aQueue.PopFront());

    if (pi->WasTraversed() && mVisitor.ShouldVisitNode(pi)) {
      mVisitor.VisitNode(pi);
      for (EdgePool::Iterator child = pi->FirstChild(),
                              child_end = pi->LastChild();
           child != child_end; ++child) {
        CheckedPush(aQueue, *child);
      }
    }
  }
}

struct CCGraphDescriber : public LinkedListElement<CCGraphDescriber> {
  CCGraphDescriber() : mAddress("0x"), mCnt(0), mType(eUnknown) {}

  enum Type {
    eRefCountedObject,
    eGCedObject,
    eGCMarkedObject,
    eEdge,
    eRoot,
    eGarbage,
    eUnknown
  };

  nsCString mAddress;
  nsCString mName;
  nsCString mCompartmentOrToAddress;
  uint32_t mCnt;
  Type mType;
};

class LogStringMessageAsync : public CancelableRunnable {
 public:
  explicit LogStringMessageAsync(const nsAString& aMsg)
      : mozilla::CancelableRunnable("LogStringMessageAsync"), mMsg(aMsg) {}

  NS_IMETHOD Run() override {
    nsCOMPtr<nsIConsoleService> cs =
        do_GetService(NS_CONSOLESERVICE_CONTRACTID);
    if (cs) {
      cs->LogStringMessage(mMsg.get());
    }
    return NS_OK;
  }

 private:
  nsString mMsg;
};

class nsCycleCollectorLogSinkToFile final : public nsICycleCollectorLogSink {
 public:
  NS_DECL_ISUPPORTS

  nsCycleCollectorLogSinkToFile()
      : mProcessIdentifier(base::GetCurrentProcId()),
        mGCLog("gc-edges"),
        mCCLog("cc-edges") {}

  NS_IMETHOD GetFilenameIdentifier(nsAString& aIdentifier) override {
    aIdentifier = mFilenameIdentifier;
    return NS_OK;
  }

  NS_IMETHOD SetFilenameIdentifier(const nsAString& aIdentifier) override {
    mFilenameIdentifier = aIdentifier;
    return NS_OK;
  }

  NS_IMETHOD GetProcessIdentifier(int32_t* aIdentifier) override {
    *aIdentifier = mProcessIdentifier;
    return NS_OK;
  }

  NS_IMETHOD SetProcessIdentifier(int32_t aIdentifier) override {
    mProcessIdentifier = aIdentifier;
    return NS_OK;
  }

  NS_IMETHOD GetGcLog(nsIFile** aPath) override {
    NS_IF_ADDREF(*aPath = mGCLog.mFile);
    return NS_OK;
  }

  NS_IMETHOD GetCcLog(nsIFile** aPath) override {
    NS_IF_ADDREF(*aPath = mCCLog.mFile);
    return NS_OK;
  }

  NS_IMETHOD Open(FILE** aGCLog, FILE** aCCLog) override {
    nsresult rv;

    if (mGCLog.mStream || mCCLog.mStream) {
      return NS_ERROR_UNEXPECTED;
    }

    rv = OpenLog(&mGCLog);
    NS_ENSURE_SUCCESS(rv, rv);
    *aGCLog = mGCLog.mStream;

    rv = OpenLog(&mCCLog);
    NS_ENSURE_SUCCESS(rv, rv);
    *aCCLog = mCCLog.mStream;

    return NS_OK;
  }

  NS_IMETHOD CloseGCLog() override {
    if (!mGCLog.mStream) {
      return NS_ERROR_UNEXPECTED;
    }
    CloseLog(&mGCLog, NS_LITERAL_STRING("Garbage"));
    return NS_OK;
  }

  NS_IMETHOD CloseCCLog() override {
    if (!mCCLog.mStream) {
      return NS_ERROR_UNEXPECTED;
    }
    CloseLog(&mCCLog, NS_LITERAL_STRING("Cycle"));
    return NS_OK;
  }

 private:
  ~nsCycleCollectorLogSinkToFile() {
    if (mGCLog.mStream) {
      MozillaUnRegisterDebugFILE(mGCLog.mStream);
      fclose(mGCLog.mStream);
    }
    if (mCCLog.mStream) {
      MozillaUnRegisterDebugFILE(mCCLog.mStream);
      fclose(mCCLog.mStream);
    }
  }

  struct FileInfo {
    const char* const mPrefix;
    nsCOMPtr<nsIFile> mFile;
    FILE* mStream;

    explicit FileInfo(const char* aPrefix)
        : mPrefix(aPrefix), mStream(nullptr) {}
  };

  /**
   * Create a new file named something like aPrefix.$PID.$IDENTIFIER.log in
   * $MOZ_CC_LOG_DIRECTORY or in the system's temp directory. No existing
   * file will be overwritten; if aPrefix.$PID.$IDENTIFIER.log exists, we'll
   * try a file named something like aPrefix.$PID.$IDENTIFIER-1.log, and so
   * on.
   */
  already_AddRefed<nsIFile> CreateTempFile(const char* aPrefix) {
    nsPrintfCString filename("%s.%d%s%s.log", aPrefix, mProcessIdentifier,
                             mFilenameIdentifier.IsEmpty() ? "" : ".",
                             NS_ConvertUTF16toUTF8(mFilenameIdentifier).get());

    // Get the log directory either from $MOZ_CC_LOG_DIRECTORY or from
    // the fallback directories in OpenTempFile. We don't use an nsCOMPtr
    // here because OpenTempFile uses an in/out param and getter_AddRefs
    // wouldn't work.
    nsIFile* logFile = nullptr;
    if (char* env = PR_GetEnv("MOZ_CC_LOG_DIRECTORY")) {
      NS_NewNativeLocalFile(nsCString(env), /* followLinks = */ true, &logFile);
    }

    // On Android or B2G, this function will open a file named
    // aFilename under a memory-reporting-specific folder
    // (/data/local/tmp/memory-reports). Otherwise, it will open a
    // file named aFilename under "NS_OS_TEMP_DIR".
    nsresult rv = nsDumpUtils::OpenTempFile(
        filename, &logFile, NS_LITERAL_CSTRING("memory-reports"));
    if (NS_FAILED(rv)) {
      NS_IF_RELEASE(logFile);
      return nullptr;
    }

    return dont_AddRef(logFile);
  }

  nsresult OpenLog(FileInfo* aLog) {
    // Initially create the log in a file starting with "incomplete-".
    // We'll move the file and strip off the "incomplete-" once the dump
    // completes. (We do this because we don't want scripts which poll
    // the filesystem looking for GC/CC dumps to grab a file before we're
    // finished writing to it.)
    nsAutoCString incomplete;
    incomplete += "incomplete-";
    incomplete += aLog->mPrefix;
    MOZ_ASSERT(!aLog->mFile);
    aLog->mFile = CreateTempFile(incomplete.get());
    if (NS_WARN_IF(!aLog->mFile)) {
      return NS_ERROR_UNEXPECTED;
    }

    MOZ_ASSERT(!aLog->mStream);
    nsresult rv = aLog->mFile->OpenANSIFileDesc("w", &aLog->mStream);
    if (NS_WARN_IF(NS_FAILED(rv))) {
      return NS_ERROR_UNEXPECTED;
    }
    MozillaRegisterDebugFILE(aLog->mStream);
    return NS_OK;
  }

  nsresult CloseLog(FileInfo* aLog, const nsAString& aCollectorKind) {
    MOZ_ASSERT(aLog->mStream);
    MOZ_ASSERT(aLog->mFile);

    MozillaUnRegisterDebugFILE(aLog->mStream);
    fclose(aLog->mStream);
    aLog->mStream = nullptr;

    // Strip off "incomplete-".
    nsCOMPtr<nsIFile> logFileFinalDestination = CreateTempFile(aLog->mPrefix);
    if (NS_WARN_IF(!logFileFinalDestination)) {
      return NS_ERROR_UNEXPECTED;
    }

    nsAutoString logFileFinalDestinationName;
    logFileFinalDestination->GetLeafName(logFileFinalDestinationName);
    if (NS_WARN_IF(logFileFinalDestinationName.IsEmpty())) {
      return NS_ERROR_UNEXPECTED;
    }

    aLog->mFile->MoveTo(/* directory */ nullptr, logFileFinalDestinationName);

    // Save the file path.
    aLog->mFile = logFileFinalDestination;

    // Log to the error console.
    nsAutoString logPath;
    logFileFinalDestination->GetPath(logPath);
    nsAutoString msg = aCollectorKind +
                       NS_LITERAL_STRING(" Collector log dumped to ") + logPath;

    // We don't want any JS to run between ScanRoots and CollectWhite calls,
    // and since ScanRoots calls this method, better to log the message
    // asynchronously.
    RefPtr<LogStringMessageAsync> log = new LogStringMessageAsync(msg);
    NS_DispatchToCurrentThread(log);
    return NS_OK;
  }

  int32_t mProcessIdentifier;
  nsString mFilenameIdentifier;
  FileInfo mGCLog;
  FileInfo mCCLog;
};

NS_IMPL_ISUPPORTS(nsCycleCollectorLogSinkToFile, nsICycleCollectorLogSink)

class nsCycleCollectorLogger final : public nsICycleCollectorListener {
  ~nsCycleCollectorLogger() { ClearDescribers(); }

 public:
  nsCycleCollectorLogger()
      : mLogSink(nsCycleCollector_createLogSink()),
        mWantAllTraces(false),
        mDisableLog(false),
        mWantAfterProcessing(false),
        mCCLog(nullptr) {}

  NS_DECL_ISUPPORTS

  void SetAllTraces() { mWantAllTraces = true; }

  bool IsAllTraces() { return mWantAllTraces; }

  NS_IMETHOD AllTraces(nsICycleCollectorListener** aListener) override {
    SetAllTraces();
    NS_ADDREF(*aListener = this);
    return NS_OK;
  }

  NS_IMETHOD GetWantAllTraces(bool* aAllTraces) override {
    *aAllTraces = mWantAllTraces;
    return NS_OK;
  }

  NS_IMETHOD GetDisableLog(bool* aDisableLog) override {
    *aDisableLog = mDisableLog;
    return NS_OK;
  }

  NS_IMETHOD SetDisableLog(bool aDisableLog) override {
    mDisableLog = aDisableLog;
    return NS_OK;
  }

  NS_IMETHOD GetWantAfterProcessing(bool* aWantAfterProcessing) override {
    *aWantAfterProcessing = mWantAfterProcessing;
    return NS_OK;
  }

  NS_IMETHOD SetWantAfterProcessing(bool aWantAfterProcessing) override {
    mWantAfterProcessing = aWantAfterProcessing;
    return NS_OK;
  }

  NS_IMETHOD GetLogSink(nsICycleCollectorLogSink** aLogSink) override {
    NS_ADDREF(*aLogSink = mLogSink);
    return NS_OK;
  }

  NS_IMETHOD SetLogSink(nsICycleCollectorLogSink* aLogSink) override {
    if (!aLogSink) {
      return NS_ERROR_INVALID_ARG;
    }
    mLogSink = aLogSink;
    return NS_OK;
  }

  nsresult Begin() {
    nsresult rv;

    mCurrentAddress.AssignLiteral("0x");
    ClearDescribers();
    if (mDisableLog) {
      return NS_OK;
    }

    FILE* gcLog;
    rv = mLogSink->Open(&gcLog, &mCCLog);
    NS_ENSURE_SUCCESS(rv, rv);
    // Dump the JS heap.
    CollectorData* data = sCollectorData.get();
    if (data && data->mContext) {
      data->mContext->Runtime()->DumpJSHeap(gcLog);
    }
    rv = mLogSink->CloseGCLog();
    NS_ENSURE_SUCCESS(rv, rv);

    fprintf(mCCLog, "# WantAllTraces=%s\n", mWantAllTraces ? "true" : "false");
    return NS_OK;
  }

  void NoteRefCountedObject(uint64_t aAddress, uint32_t aRefCount,
                            const char* aObjectDescription) {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [rc=%u] %s\n", (void*)aAddress, aRefCount,
              aObjectDescription);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      mCurrentAddress.AssignLiteral("0x");
      mCurrentAddress.AppendInt(aAddress, 16);
      d->mType = CCGraphDescriber::eRefCountedObject;
      d->mAddress = mCurrentAddress;
      d->mCnt = aRefCount;
      d->mName.Append(aObjectDescription);
    }
  }

  void NoteGCedObject(uint64_t aAddress, bool aMarked,
                      const char* aObjectDescription,
                      uint64_t aCompartmentAddress) {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [gc%s] %s\n", (void*)aAddress,
              aMarked ? ".marked" : "", aObjectDescription);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      mCurrentAddress.AssignLiteral("0x");
      mCurrentAddress.AppendInt(aAddress, 16);
      d->mType = aMarked ? CCGraphDescriber::eGCMarkedObject
                         : CCGraphDescriber::eGCedObject;
      d->mAddress = mCurrentAddress;
      d->mName.Append(aObjectDescription);
      if (aCompartmentAddress) {
        d->mCompartmentOrToAddress.AssignLiteral("0x");
        d->mCompartmentOrToAddress.AppendInt(aCompartmentAddress, 16);
      } else {
        d->mCompartmentOrToAddress.SetIsVoid(true);
      }
    }
  }

  void NoteEdge(uint64_t aToAddress, const char* aEdgeName) {
    if (!mDisableLog) {
      fprintf(mCCLog, "> %p %s\n", (void*)aToAddress, aEdgeName);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      d->mType = CCGraphDescriber::eEdge;
      d->mAddress = mCurrentAddress;
      d->mCompartmentOrToAddress.AssignLiteral("0x");
      d->mCompartmentOrToAddress.AppendInt(aToAddress, 16);
      d->mName.Append(aEdgeName);
    }
  }

  void NoteWeakMapEntry(uint64_t aMap, uint64_t aKey, uint64_t aKeyDelegate,
                        uint64_t aValue) {
    if (!mDisableLog) {
      fprintf(mCCLog, "WeakMapEntry map=%p key=%p keyDelegate=%p value=%p\n",
              (void*)aMap, (void*)aKey, (void*)aKeyDelegate, (void*)aValue);
    }
    // We don't support after-processing for weak map entries.
  }

  void NoteIncrementalRoot(uint64_t aAddress) {
    if (!mDisableLog) {
      fprintf(mCCLog, "IncrementalRoot %p\n", (void*)aAddress);
    }
    // We don't support after-processing for incremental roots.
  }

  void BeginResults() {
    if (!mDisableLog) {
      fputs("==========\n", mCCLog);
    }
  }

  void DescribeRoot(uint64_t aAddress, uint32_t aKnownEdges) {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [known=%u]\n", (void*)aAddress, aKnownEdges);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      d->mType = CCGraphDescriber::eRoot;
      d->mAddress.AppendInt(aAddress, 16);
      d->mCnt = aKnownEdges;
    }
  }

  void DescribeGarbage(uint64_t aAddress) {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [garbage]\n", (void*)aAddress);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      d->mType = CCGraphDescriber::eGarbage;
      d->mAddress.AppendInt(aAddress, 16);
    }
  }

  void End() {
    if (!mDisableLog) {
      mCCLog = nullptr;
      Unused << NS_WARN_IF(NS_FAILED(mLogSink->CloseCCLog()));
    }
  }

  NS_IMETHOD ProcessNext(nsICycleCollectorHandler* aHandler,
                         bool* aCanContinue) override {
    if (NS_WARN_IF(!aHandler) || NS_WARN_IF(!mWantAfterProcessing)) {
      return NS_ERROR_UNEXPECTED;
    }
    CCGraphDescriber* d = mDescribers.popFirst();
    if (d) {
      switch (d->mType) {
        case CCGraphDescriber::eRefCountedObject:
          aHandler->NoteRefCountedObject(d->mAddress, d->mCnt, d->mName);
          break;
        case CCGraphDescriber::eGCedObject:
        case CCGraphDescriber::eGCMarkedObject:
          aHandler->NoteGCedObject(
              d->mAddress, d->mType == CCGraphDescriber::eGCMarkedObject,
              d->mName, d->mCompartmentOrToAddress);
          break;
        case CCGraphDescriber::eEdge:
          aHandler->NoteEdge(d->mAddress, d->mCompartmentOrToAddress, d->mName);
          break;
        case CCGraphDescriber::eRoot:
          aHandler->DescribeRoot(d->mAddress, d->mCnt);
          break;
        case CCGraphDescriber::eGarbage:
          aHandler->DescribeGarbage(d->mAddress);
          break;
        case CCGraphDescriber::eUnknown:
          MOZ_ASSERT_UNREACHABLE("CCGraphDescriber::eUnknown");
          break;
      }
      delete d;
    }
    if (!(*aCanContinue = !mDescribers.isEmpty())) {
      mCurrentAddress.AssignLiteral("0x");
    }
    return NS_OK;
  }

  NS_IMETHOD AsLogger(nsCycleCollectorLogger** aRetVal) override {
    RefPtr<nsCycleCollectorLogger> rval = this;
    rval.forget(aRetVal);
    return NS_OK;
  }

 private:
  void ClearDescribers() {
    CCGraphDescriber* d;
    while ((d = mDescribers.popFirst())) {
      delete d;
    }
  }

  nsCOMPtr<nsICycleCollectorLogSink> mLogSink;
  bool mWantAllTraces;
  bool mDisableLog;
  bool mWantAfterProcessing;
  nsCString mCurrentAddress;
  mozilla::LinkedList<CCGraphDescriber> mDescribers;
  FILE* mCCLog;
};

NS_IMPL_ISUPPORTS(nsCycleCollectorLogger, nsICycleCollectorListener)

already_AddRefed<nsICycleCollectorListener> nsCycleCollector_createLogger() {
  nsCOMPtr<nsICycleCollectorListener> logger = new nsCycleCollectorLogger();
  return logger.forget();
}

static bool GCThingIsGrayCCThing(JS::GCCellPtr thing) {
  return JS::IsCCTraceKind(thing.kind()) && JS::GCThingIsMarkedGray(thing);
}

static bool ValueIsGrayCCThing(const JS::Value& value) {
  return JS::IsCCTraceKind(value.traceKind()) &&
         JS::GCThingIsMarkedGray(value.toGCCellPtr());
}

////////////////////////////////////////////////////////////////////////
// Bacon & Rajan's |MarkRoots| routine.
////////////////////////////////////////////////////////////////////////

class CCGraphBuilder final : public nsCycleCollectionTraversalCallback,
                             public nsCycleCollectionNoteRootCallback {
 private:
  CCGraph& mGraph;
  CycleCollectorResults& mResults;
  NodePool::Builder mNodeBuilder;
  EdgePool::Builder mEdgeBuilder;
  MOZ_INIT_OUTSIDE_CTOR PtrInfo* mCurrPi;
  nsCycleCollectionParticipant* mJSParticipant;
  nsCycleCollectionParticipant* mJSZoneParticipant;
  nsCString mNextEdgeName;
  RefPtr<nsCycleCollectorLogger> mLogger;
  bool mMergeZones;
  nsAutoPtr<NodePool::Enumerator> mCurrNode;
  uint32_t mNoteChildCount;

  struct PtrInfoCache : public MruCache<void*, PtrInfo*, PtrInfoCache, 491> {
    static HashNumber Hash(const void* aKey) { return HashGeneric(aKey); }
    static bool Match(const void* aKey, const PtrInfo* aVal) {
      return aVal->mPointer == aKey;
    }
  };

  PtrInfoCache mGraphCache;

 public:
  CCGraphBuilder(CCGraph& aGraph, CycleCollectorResults& aResults,
                 CycleCollectedJSRuntime* aCCRuntime,
                 nsCycleCollectorLogger* aLogger, bool aMergeZones);
  virtual ~CCGraphBuilder();

  bool WantAllTraces() const {
    return nsCycleCollectionNoteRootCallback::WantAllTraces();
  }

  bool AddPurpleRoot(void* aRoot, nsCycleCollectionParticipant* aParti);

  // This is called when all roots have been added to the graph, to prepare for
  // BuildGraph().
  void DoneAddingRoots();

  // Do some work traversing nodes in the graph. Returns true if this graph
  // building is finished.
  bool BuildGraph(SliceBudget& aBudget);

  void RemoveCachedEntry(void* aPtr) { mGraphCache.Remove(aPtr); }

 private:
  PtrInfo* AddNode(void* aPtr, nsCycleCollectionParticipant* aParticipant);
  PtrInfo* AddWeakMapNode(JS::GCCellPtr aThing);
  PtrInfo* AddWeakMapNode(JSObject* aObject);

  void SetFirstChild() { mCurrPi->SetFirstChild(mEdgeBuilder.Mark()); }

  void SetLastChild() { mCurrPi->SetLastChild(mEdgeBuilder.Mark()); }

 public:
  // nsCycleCollectionNoteRootCallback methods.
  NS_IMETHOD_(void)
  NoteXPCOMRoot(nsISupports* aRoot,
                nsCycleCollectionParticipant* aParticipant) override;
  NS_IMETHOD_(void) NoteJSRoot(JSObject* aRoot) override;
  NS_IMETHOD_(void)
  NoteNativeRoot(void* aRoot,
                 nsCycleCollectionParticipant* aParticipant) override;
  NS_IMETHOD_(void)
  NoteWeakMapping(JSObject* aMap, JS::GCCellPtr aKey, JSObject* aKdelegate,
                  JS::GCCellPtr aVal) override;

  // nsCycleCollectionTraversalCallback methods.
  NS_IMETHOD_(void)
  DescribeRefCountedNode(nsrefcnt aRefCount, const char* aObjName) override;
  NS_IMETHOD_(void)
  DescribeGCedNode(bool aIsMarked, const char* aObjName,
                   uint64_t aCompartmentAddress) override;

  NS_IMETHOD_(void) NoteXPCOMChild(nsISupports* aChild) override;
  NS_IMETHOD_(void) NoteJSChild(const JS::GCCellPtr& aThing) override;
  NS_IMETHOD_(void)
  NoteNativeChild(void* aChild,
                  nsCycleCollectionParticipant* aParticipant) override;
  NS_IMETHOD_(void) NoteNextEdgeName(const char* aName) override;

 private:
  void NoteJSChild(JS::GCCellPtr aChild);

  NS_IMETHOD_(void)
  NoteRoot(void* aRoot, nsCycleCollectionParticipant* aParticipant) {
    MOZ_ASSERT(aRoot);
    MOZ_ASSERT(aParticipant);

    if (!aParticipant->CanSkipInCC(aRoot) || MOZ_UNLIKELY(WantAllTraces())) {
      AddNode(aRoot, aParticipant);
    }
  }

  NS_IMETHOD_(void)
  NoteChild(void* aChild, nsCycleCollectionParticipant* aCp,
            nsCString& aEdgeName) {
    PtrInfo* childPi = AddNode(aChild, aCp);
    if (!childPi) {
      return;
    }
    mEdgeBuilder.Add(childPi);
    if (mLogger) {
      mLogger->NoteEdge((uint64_t)aChild, aEdgeName.get());
    }
    ++childPi->mInternalRefs;
  }

  JS::Zone* MergeZone(JS::GCCellPtr aGcthing) {
    if (!mMergeZones) {
      return nullptr;
    }
    JS::Zone* zone = JS::GetTenuredGCThingZone(aGcthing);
    if (js::IsSystemZone(zone)) {
      return nullptr;
    }
    return zone;
  }
};

2014-07-09 23:31:00 +04:00
|
|
|
CCGraphBuilder::CCGraphBuilder(CCGraph& aGraph, CycleCollectorResults& aResults,
|
2017-04-28 00:10:15 +03:00
|
|
|
CycleCollectedJSRuntime* aCCRuntime,
|
2015-06-05 00:41:31 +03:00
|
|
|
nsCycleCollectorLogger* aLogger,
|
2013-03-17 07:36:37 +04:00
|
|
|
bool aMergeZones)
|
2014-05-13 21:41:38 +04:00
|
|
|
: mGraph(aGraph),
|
|
|
|
mResults(aResults),
|
|
|
|
mNodeBuilder(aGraph.mNodes),
|
|
|
|
mEdgeBuilder(aGraph.mEdges),
|
|
|
|
mJSParticipant(nullptr),
|
|
|
|
mJSZoneParticipant(nullptr),
|
2015-06-05 00:41:31 +03:00
|
|
|
mLogger(aLogger),
|
2014-05-13 21:41:38 +04:00
|
|
|
mMergeZones(aMergeZones),
|
2017-09-12 18:59:57 +03:00
|
|
|
mNoteChildCount(0) {
|
2018-09-04 23:22:37 +03:00
|
|
|
// 4096 is an allocation bucket size.
|
|
|
|
static_assert(sizeof(CCGraphBuilder) <= 4096,
|
|
|
|
"Don't create too large CCGraphBuilder objects");
|
|
|
|
|
2017-04-28 00:10:15 +03:00
|
|
|
if (aCCRuntime) {
|
|
|
|
mJSParticipant = aCCRuntime->GCThingParticipant();
|
|
|
|
mJSZoneParticipant = aCCRuntime->ZoneParticipant();
|
2014-05-05 21:30:39 +04:00
|
|
|
}
|
|
|
|
|
2015-06-05 00:41:31 +03:00
|
|
|
if (mLogger) {
|
2015-05-15 20:33:08 +03:00
|
|
|
mFlags |= nsCycleCollectionTraversalCallback::WANT_DEBUG_INFO;
|
2015-06-05 00:41:31 +03:00
|
|
|
if (mLogger->IsAllTraces()) {
|
2015-05-15 20:33:08 +03:00
|
|
|
mFlags |= nsCycleCollectionTraversalCallback::WANT_ALL_TRACES;
|
2014-05-05 21:30:39 +04:00
|
|
|
mWantAllTraces = true; // for nsCycleCollectionNoteRootCallback
|
2010-08-12 04:03:23 +04:00
|
|
|
}
|
2014-05-05 21:30:39 +04:00
|
|
|
}
|
2012-01-02 01:48:42 +04:00
|
|
|
|
2014-05-05 21:30:39 +04:00
|
|
|
mMergeZones = mMergeZones && MOZ_LIKELY(!WantAllTraces());
|
2013-05-21 00:08:11 +04:00
|
|
|
|
2014-05-05 21:30:39 +04:00
|
|
|
MOZ_ASSERT(nsCycleCollectionNoteRootCallback::WantAllTraces() ==
|
|
|
|
nsCycleCollectionTraversalCallback::WantAllTraces());
|
2007-04-26 01:12:11 +04:00
|
|
|
}
|
|
|
|
|
2014-07-09 23:31:00 +04:00
|
|
|
CCGraphBuilder::~CCGraphBuilder() {}

PtrInfo* CCGraphBuilder::AddNode(void* aPtr,
                                 nsCycleCollectionParticipant* aParticipant) {
  if (mGraph.mOutOfMemory) {
    return nullptr;
  }

  PtrInfoCache::Entry cached = mGraphCache.Lookup(aPtr);
  if (cached) {
    MOZ_ASSERT(cached.Data()->mParticipant == aParticipant,
               "nsCycleCollectionParticipant shouldn't change!");
    return cached.Data();
  }

  PtrInfo* result;
  auto p = mGraph.mPtrInfoMap.lookupForAdd(aPtr);
  if (!p) {
    // New entry
    result = mNodeBuilder.Add(aPtr, aParticipant);
    if (!result) {
      return nullptr;
    }

    if (!mGraph.mPtrInfoMap.add(p, result)) {
      // `result` leaks here, but we can't free it because it's
      // pool-allocated within NodePool.
      mGraph.mOutOfMemory = true;
      MOZ_ASSERT(false, "OOM while building cycle collector graph");
      return nullptr;
    }
  } else {
    result = *p;
    MOZ_ASSERT(result->mParticipant == aParticipant,
               "nsCycleCollectionParticipant shouldn't change!");
  }

  cached.Set(result);

  return result;
}

bool CCGraphBuilder::AddPurpleRoot(void* aRoot,
                                   nsCycleCollectionParticipant* aParti) {
  ToParticipant(aRoot, &aParti);

  if (WantAllTraces() || !aParti->CanSkipInCC(aRoot)) {
    PtrInfo* pinfo = AddNode(aRoot, aParti);
    if (!pinfo) {
      return false;
    }
  }

  return true;
}

void CCGraphBuilder::DoneAddingRoots() {
  // We've finished adding roots, and everything in the graph is a root.
  mGraph.mRootCount = mGraph.MapCount();

  mCurrNode = new NodePool::Enumerator(mGraph.mNodes);
}

MOZ_NEVER_INLINE bool CCGraphBuilder::BuildGraph(SliceBudget& aBudget) {
  const intptr_t kNumNodesBetweenTimeChecks = 1000;
  const intptr_t kStep = SliceBudget::CounterReset / kNumNodesBetweenTimeChecks;

  MOZ_ASSERT(mCurrNode);

  while (!aBudget.isOverBudget() && !mCurrNode->IsDone()) {
    mNoteChildCount = 0;

    PtrInfo* pi = mCurrNode->GetNext();
    if (!pi) {
      MOZ_CRASH();
    }

    mCurrPi = pi;

    // We need to call SetFirstChild() even on deleted nodes, to set their
    // firstChild() that may be read by a prior non-deleted neighbor.
    SetFirstChild();

    if (pi->mParticipant) {
      nsresult rv = pi->mParticipant->TraverseNativeAndJS(pi->mPointer, *this);
      MOZ_RELEASE_ASSERT(!NS_FAILED(rv),
                         "Cycle collector Traverse method failed");
    }

    if (mCurrNode->AtBlockEnd()) {
      SetLastChild();
    }

    aBudget.step(kStep * (mNoteChildCount + 1));
  }

  if (!mCurrNode->IsDone()) {
    return false;
  }

  if (mGraph.mRootCount > 0) {
    SetLastChild();
  }

  mCurrNode = nullptr;

  return true;
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteXPCOMRoot(nsISupports* aRoot,
                              nsCycleCollectionParticipant* aParticipant) {
  MOZ_ASSERT(aRoot == CanonicalizeXPCOMParticipant(aRoot));

#ifdef DEBUG
  nsXPCOMCycleCollectionParticipant* cp;
  ToParticipant(aRoot, &cp);
  MOZ_ASSERT(aParticipant == cp);
#endif

  NoteRoot(aRoot, aParticipant);
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteJSRoot(JSObject* aRoot) {
  if (JS::Zone* zone = MergeZone(JS::GCCellPtr(aRoot))) {
    NoteRoot(zone, mJSZoneParticipant);
  } else {
    NoteRoot(aRoot, mJSParticipant);
  }
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteNativeRoot(void* aRoot,
                               nsCycleCollectionParticipant* aParticipant) {
  NoteRoot(aRoot, aParticipant);
}

NS_IMETHODIMP_(void)
CCGraphBuilder::DescribeRefCountedNode(nsrefcnt aRefCount,
                                       const char* aObjName) {
  mCurrPi->AnnotatedReleaseAssert(aRefCount != 0,
                                  "CCed refcounted object has zero refcount");
  mCurrPi->AnnotatedReleaseAssert(
      aRefCount != UINT32_MAX,
      "CCed refcounted object has overflowing refcount");

  mResults.mVisitedRefCounted++;

  if (mLogger) {
    mLogger->NoteRefCountedObject((uint64_t)mCurrPi->mPointer, aRefCount,
                                  aObjName);
  }

  mCurrPi->mRefCount = aRefCount;
}

NS_IMETHODIMP_(void)
CCGraphBuilder::DescribeGCedNode(bool aIsMarked, const char* aObjName,
                                 uint64_t aCompartmentAddress) {
  uint32_t refCount = aIsMarked ? UINT32_MAX : 0;
  mResults.mVisitedGCed++;

  if (mLogger) {
    mLogger->NoteGCedObject((uint64_t)mCurrPi->mPointer, aIsMarked, aObjName,
                            aCompartmentAddress);
  }

  mCurrPi->mRefCount = refCount;
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteXPCOMChild(nsISupports* aChild) {
  nsCString edgeName;
  if (WantDebugInfo()) {
    edgeName.Assign(mNextEdgeName);
    mNextEdgeName.Truncate();
  }
  if (!aChild || !(aChild = CanonicalizeXPCOMParticipant(aChild))) {
    return;
  }

  ++mNoteChildCount;

  nsXPCOMCycleCollectionParticipant* cp;
  ToParticipant(aChild, &cp);
  if (cp && (!cp->CanSkipThis(aChild) || WantAllTraces())) {
    NoteChild(aChild, cp, edgeName);
  }
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteNativeChild(void* aChild,
                                nsCycleCollectionParticipant* aParticipant) {
  nsCString edgeName;
  if (WantDebugInfo()) {
    edgeName.Assign(mNextEdgeName);
    mNextEdgeName.Truncate();
  }
  if (!aChild) {
    return;
  }

  ++mNoteChildCount;

  MOZ_ASSERT(aParticipant, "Need a nsCycleCollectionParticipant!");
  if (!aParticipant->CanSkipThis(aChild) || WantAllTraces()) {
    NoteChild(aChild, aParticipant, edgeName);
  }
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteJSChild(const JS::GCCellPtr& aChild) {
  if (!aChild) {
    return;
  }

  ++mNoteChildCount;

  nsCString edgeName;
  if (MOZ_UNLIKELY(WantDebugInfo())) {
    edgeName.Assign(mNextEdgeName);
    mNextEdgeName.Truncate();
  }

  if (GCThingIsGrayCCThing(aChild) || MOZ_UNLIKELY(WantAllTraces())) {
    if (JS::Zone* zone = MergeZone(aChild)) {
      NoteChild(zone, mJSZoneParticipant, edgeName);
    } else {
      NoteChild(aChild.asCell(), mJSParticipant, edgeName);
    }
  }
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteNextEdgeName(const char* aName) {
  if (WantDebugInfo()) {
    mNextEdgeName = aName;
  }
}

PtrInfo* CCGraphBuilder::AddWeakMapNode(JS::GCCellPtr aNode) {
  MOZ_ASSERT(aNode, "Weak map node should be non-null.");

  if (!GCThingIsGrayCCThing(aNode) && !WantAllTraces()) {
    return nullptr;
  }

  if (JS::Zone* zone = MergeZone(aNode)) {
    return AddNode(zone, mJSZoneParticipant);
  }
  return AddNode(aNode.asCell(), mJSParticipant);
}

PtrInfo* CCGraphBuilder::AddWeakMapNode(JSObject* aObject) {
  return AddWeakMapNode(JS::GCCellPtr(aObject));
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteWeakMapping(JSObject* aMap, JS::GCCellPtr aKey,
                                JSObject* aKdelegate, JS::GCCellPtr aVal) {
  // Don't try to optimize away the entry here, as we've already attempted to
  // do that in TraceWeakMapping in nsXPConnect.
  WeakMapping* mapping = mGraph.mWeakMaps.AppendElement();
  mapping->mMap = aMap ? AddWeakMapNode(aMap) : nullptr;
  mapping->mKey = aKey ? AddWeakMapNode(aKey) : nullptr;
  mapping->mKeyDelegate =
      aKdelegate ? AddWeakMapNode(aKdelegate) : mapping->mKey;
  mapping->mVal = aVal ? AddWeakMapNode(aVal) : nullptr;

  if (mLogger) {
    mLogger->NoteWeakMapEntry((uint64_t)aMap, aKey ? aKey.unsafeAsInteger() : 0,
                              (uint64_t)aKdelegate,
                              aVal ? aVal.unsafeAsInteger() : 0);
  }
}

static bool AddPurpleRoot(CCGraphBuilder& aBuilder, void* aRoot,
                          nsCycleCollectionParticipant* aParti) {
  return aBuilder.AddPurpleRoot(aRoot, aParti);
}

// MayHaveChild() will be false after a Traverse if the object does
// not have any children the CC will visit.
class ChildFinder : public nsCycleCollectionTraversalCallback {
 public:
  ChildFinder() : mMayHaveChild(false) {}

  // The logic of the Note*Child functions must mirror that of their
  // respective functions in CCGraphBuilder.
  NS_IMETHOD_(void) NoteXPCOMChild(nsISupports* aChild) override;
  NS_IMETHOD_(void)
  NoteNativeChild(void* aChild, nsCycleCollectionParticipant* aHelper) override;
  NS_IMETHOD_(void) NoteJSChild(const JS::GCCellPtr& aThing) override;

  NS_IMETHOD_(void)
  DescribeRefCountedNode(nsrefcnt aRefcount, const char* aObjname) override {}
  NS_IMETHOD_(void)
  DescribeGCedNode(bool aIsMarked, const char* aObjname,
                   uint64_t aCompartmentAddress) override {}
  NS_IMETHOD_(void) NoteNextEdgeName(const char* aName) override {}
  bool MayHaveChild() { return mMayHaveChild; }

 private:
  bool mMayHaveChild;
};

NS_IMETHODIMP_(void)
ChildFinder::NoteXPCOMChild(nsISupports* aChild) {
  if (!aChild || !(aChild = CanonicalizeXPCOMParticipant(aChild))) {
    return;
  }
  nsXPCOMCycleCollectionParticipant* cp;
  ToParticipant(aChild, &cp);
  if (cp && !cp->CanSkip(aChild, true)) {
    mMayHaveChild = true;
  }
}

NS_IMETHODIMP_(void)
ChildFinder::NoteNativeChild(void* aChild,
                             nsCycleCollectionParticipant* aHelper) {
  if (!aChild) {
    return;
  }
  MOZ_ASSERT(aHelper, "Native child must have a participant");
  if (!aHelper->CanSkip(aChild, true)) {
    mMayHaveChild = true;
  }
}

NS_IMETHODIMP_(void)
ChildFinder::NoteJSChild(const JS::GCCellPtr& aChild) {
  if (aChild && JS::GCThingIsMarkedGray(aChild)) {
    mMayHaveChild = true;
  }
}

static bool MayHaveChild(void* aObj, nsCycleCollectionParticipant* aCp) {
  ChildFinder cf;
  aCp->TraverseNativeAndJS(aObj, cf);
  return cf.MayHaveChild();
}

// JSPurpleBuffer keeps references to GCThings which might affect the
// next cycle collection. It is owned only by itself and during unlink its
// self reference is broken down and the object ends up killing itself.
// If GC happens before CC, references to GCThings and the self reference are
// removed.
class JSPurpleBuffer {
  ~JSPurpleBuffer() {
    MOZ_ASSERT(mValues.IsEmpty());
    MOZ_ASSERT(mObjects.IsEmpty());
  }

 public:
  explicit JSPurpleBuffer(RefPtr<JSPurpleBuffer>& aReferenceToThis)
      : mReferenceToThis(aReferenceToThis),
        mValues(kSegmentSize),
        mObjects(kSegmentSize) {
    mReferenceToThis = this;
    mozilla::HoldJSObjects(this);
  }

  void Destroy() {
    RefPtr<JSPurpleBuffer> referenceToThis;
    mReferenceToThis.swap(referenceToThis);
    mValues.Clear();
    mObjects.Clear();
    mozilla::DropJSObjects(this);
  }

  NS_INLINE_DECL_CYCLE_COLLECTING_NATIVE_REFCOUNTING(JSPurpleBuffer)
  NS_DECL_CYCLE_COLLECTION_SCRIPT_HOLDER_NATIVE_CLASS(JSPurpleBuffer)

  RefPtr<JSPurpleBuffer>& mReferenceToThis;

  // These are raw pointers instead of Heap<T> because we only need Heap<T> for
  // pointers which may point into the nursery. The purple buffer never contains
  // pointers to the nursery because nursery gcthings can never be gray and only
  // gray things can be inserted into the purple buffer.
  static const size_t kSegmentSize = 512;
  SegmentedVector<JS::Value, kSegmentSize, InfallibleAllocPolicy> mValues;
  SegmentedVector<JSObject*, kSegmentSize, InfallibleAllocPolicy> mObjects;
};

NS_IMPL_CYCLE_COLLECTION_CLASS(JSPurpleBuffer)

NS_IMPL_CYCLE_COLLECTION_UNLINK_BEGIN(JSPurpleBuffer)
  tmp->Destroy();
NS_IMPL_CYCLE_COLLECTION_UNLINK_END

NS_IMPL_CYCLE_COLLECTION_TRAVERSE_BEGIN(JSPurpleBuffer)
  CycleCollectionNoteChild(cb, tmp, "self");
NS_IMPL_CYCLE_COLLECTION_TRAVERSE_END

#define NS_TRACE_SEGMENTED_ARRAY(_field, _type)                       \
  {                                                                   \
    for (auto iter = tmp->_field.Iter(); !iter.Done(); iter.Next()) { \
      js::gc::CallTraceCallbackOnNonHeap<_type, TraceCallbacks>(      \
          &iter.Get(), aCallbacks, #_field, aClosure);                \
    }                                                                 \
  }

NS_IMPL_CYCLE_COLLECTION_TRACE_BEGIN(JSPurpleBuffer)
  NS_TRACE_SEGMENTED_ARRAY(mValues, JS::Value)
  NS_TRACE_SEGMENTED_ARRAY(mObjects, JSObject*)
NS_IMPL_CYCLE_COLLECTION_TRACE_END

NS_IMPL_CYCLE_COLLECTION_ROOT_NATIVE(JSPurpleBuffer, AddRef)
NS_IMPL_CYCLE_COLLECTION_UNROOT_NATIVE(JSPurpleBuffer, Release)

class SnowWhiteKiller : public TraceCallbacks {
  struct SnowWhiteObject {
    void* mPointer;
    nsCycleCollectionParticipant* mParticipant;
    nsCycleCollectingAutoRefCnt* mRefCnt;
  };

  // Segments are 4 KiB on 32-bit and 8 KiB on 64-bit.
  static const size_t kSegmentSize = sizeof(void*) * 1024;
  typedef SegmentedVector<SnowWhiteObject, kSegmentSize, InfallibleAllocPolicy>
      ObjectsVector;

 public:
  SnowWhiteKiller(nsCycleCollector* aCollector, js::SliceBudget* aBudget)
      : mCollector(aCollector),
        mObjects(kSegmentSize),
        mBudget(aBudget),
        mSawSnowWhiteObjects(false) {
    MOZ_ASSERT(mCollector, "Calling SnowWhiteKiller after nsCC went away");
  }

  explicit SnowWhiteKiller(nsCycleCollector* aCollector)
      : SnowWhiteKiller(aCollector, nullptr) {}

  ~SnowWhiteKiller() {
    for (auto iter = mObjects.Iter(); !iter.Done(); iter.Next()) {
      SnowWhiteObject& o = iter.Get();
      MaybeKillObject(o);
    }
  }

 private:
  void MaybeKillObject(SnowWhiteObject& aObject) {
    if (!aObject.mRefCnt->get() && !aObject.mRefCnt->IsInPurpleBuffer()) {
      mCollector->RemoveObjectFromGraph(aObject.mPointer);
      aObject.mRefCnt->stabilizeForDeletion();
      {
        JS::AutoEnterCycleCollection autocc(mCollector->Runtime()->Runtime());
        aObject.mParticipant->Trace(aObject.mPointer, *this, nullptr);
      }
      aObject.mParticipant->DeleteCycleCollectable(aObject.mPointer);
    }
  }

 public:
  bool Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry) {
    // The cycle collector does not collect anything when recording/replaying.
    if (recordreplay::IsRecordingOrReplaying()) {
      return true;
    }

    if (mBudget) {
      if (mBudget->isOverBudget()) {
        return false;
      }
      mBudget->step();
    }

    MOZ_ASSERT(aEntry->mObject, "Null object in purple buffer");
    if (!aEntry->mRefCnt->get()) {
      mSawSnowWhiteObjects = true;
      void* o = aEntry->mObject;
      nsCycleCollectionParticipant* cp = aEntry->mParticipant;
      ToParticipant(o, &cp);
      SnowWhiteObject swo = {o, cp, aEntry->mRefCnt};
      if (!mBudget) {
        mObjects.InfallibleAppend(swo);
      }
      aBuffer.Remove(aEntry);
      if (mBudget) {
        MaybeKillObject(swo);
      }
    }
    return true;
  }

  bool HasSnowWhiteObjects() const { return !mObjects.IsEmpty(); }

  bool SawSnowWhiteObjects() const { return mSawSnowWhiteObjects; }

  virtual void Trace(JS::Heap<JS::Value>* aValue, const char* aName,
                     void* aClosure) const override {
    const JS::Value& val = aValue->unbarrieredGet();
    if (val.isGCThing() && ValueIsGrayCCThing(val)) {
      MOZ_ASSERT(!js::gc::IsInsideNursery(val.toGCThing()));
      mCollector->GetJSPurpleBuffer()->mValues.InfallibleAppend(val);
    }
  }

  virtual void Trace(JS::Heap<jsid>* aId, const char* aName,
                     void* aClosure) const override {}

  void AppendJSObjectToPurpleBuffer(JSObject* obj) const {
    if (obj && JS::ObjectIsMarkedGray(obj)) {
      MOZ_ASSERT(JS::ObjectIsTenured(obj));
      mCollector->GetJSPurpleBuffer()->mObjects.InfallibleAppend(obj);
    }
  }

  virtual void Trace(JS::Heap<JSObject*>* aObject, const char* aName,
                     void* aClosure) const override {
    AppendJSObjectToPurpleBuffer(aObject->unbarrieredGet());
  }

  virtual void Trace(JSObject** aObject, const char* aName,
                     void* aClosure) const override {
    AppendJSObjectToPurpleBuffer(*aObject);
  }

  virtual void Trace(JS::TenuredHeap<JSObject*>* aObject, const char* aName,
                     void* aClosure) const override {
    AppendJSObjectToPurpleBuffer(aObject->unbarrieredGetPtr());
  }

  virtual void Trace(JS::Heap<JSString*>* aString, const char* aName,
                     void* aClosure) const override {}

  virtual void Trace(JS::Heap<JSScript*>* aScript, const char* aName,
                     void* aClosure) const override {}

  virtual void Trace(JS::Heap<JSFunction*>* aFunction, const char* aName,
                     void* aClosure) const override {}

 private:
  RefPtr<nsCycleCollector> mCollector;
  ObjectsVector mObjects;
  js::SliceBudget* mBudget;
  bool mSawSnowWhiteObjects;
};

class RemoveSkippableVisitor : public SnowWhiteKiller {
 public:
  RemoveSkippableVisitor(nsCycleCollector* aCollector, js::SliceBudget& aBudget,
                         bool aRemoveChildlessNodes,
                         bool aAsyncSnowWhiteFreeing,
                         CC_ForgetSkippableCallback aCb)
      : SnowWhiteKiller(aCollector),
        mBudget(aBudget),
        mRemoveChildlessNodes(aRemoveChildlessNodes),
        mAsyncSnowWhiteFreeing(aAsyncSnowWhiteFreeing),
        mDispatchedDeferredDeletion(false),
        mCallback(aCb) {}

  ~RemoveSkippableVisitor() {
    // Note, we must call the callback before SnowWhiteKiller calls
    // DeleteCycleCollectable!
    if (mCallback) {
      mCallback();
    }
    if (HasSnowWhiteObjects()) {
      // Effectively a continuation.
      nsCycleCollector_dispatchDeferredDeletion(true);
    }
  }

  bool Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry) {
    if (mBudget.isOverBudget()) {
      return false;
    }

    // CanSkip calls can be a bit slow, so increase the likelihood that
    // isOverBudget actually checks whether we're over the time budget.
    mBudget.step(5);
    MOZ_ASSERT(aEntry->mObject, "null mObject in purple buffer");
    if (!aEntry->mRefCnt->get()) {
      if (!mAsyncSnowWhiteFreeing) {
        SnowWhiteKiller::Visit(aBuffer, aEntry);
      } else if (!mDispatchedDeferredDeletion) {
        mDispatchedDeferredDeletion = true;
        nsCycleCollector_dispatchDeferredDeletion(false);
      }
      return true;
    }
    void* o = aEntry->mObject;
    nsCycleCollectionParticipant* cp = aEntry->mParticipant;
    ToParticipant(o, &cp);
    if (aEntry->mRefCnt->IsPurple() && !cp->CanSkip(o, false) &&
        (!mRemoveChildlessNodes || MayHaveChild(o, cp))) {
      return true;
    }
    aBuffer.Remove(aEntry);
    return true;
  }

 private:
  js::SliceBudget& mBudget;
  bool mRemoveChildlessNodes;
  bool mAsyncSnowWhiteFreeing;
  bool mDispatchedDeferredDeletion;
  CC_ForgetSkippableCallback mCallback;
};

void nsPurpleBuffer::RemoveSkippable(nsCycleCollector* aCollector,
                                     js::SliceBudget& aBudget,
                                     bool aRemoveChildlessNodes,
                                     bool aAsyncSnowWhiteFreeing,
                                     CC_ForgetSkippableCallback aCb) {
  RemoveSkippableVisitor visitor(aCollector, aBudget, aRemoveChildlessNodes,
                                 aAsyncSnowWhiteFreeing, aCb);
  VisitEntries(visitor);
}

bool nsCycleCollector::FreeSnowWhite(bool aUntilNoSWInPurpleBuffer) {
  CheckThreadSafety();

  if (mFreeingSnowWhite) {
    return false;
  }

  AutoRestore<bool> ar(mFreeingSnowWhite);
  mFreeingSnowWhite = true;

  bool hadSnowWhiteObjects = false;
  do {
    SnowWhiteKiller visitor(this);
    mPurpleBuf.VisitEntries(visitor);
    hadSnowWhiteObjects = hadSnowWhiteObjects || visitor.HasSnowWhiteObjects();
    if (!visitor.HasSnowWhiteObjects()) {
      break;
    }
  } while (aUntilNoSWInPurpleBuffer);
  return hadSnowWhiteObjects;
}
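
// Comment-only sketch of why FreeSnowWhite loops: killing one batch of
// snow-white (refcount==0) objects runs their destructors, which can
// Release() other objects down to zero and add fresh snow-white entries to
// the purple buffer. With aUntilNoSWInPurpleBuffer, the loop is effectively:
//
//   do { killed = KillSnowWhiteEntries(purpleBuf); } while (repeat && killed);
//
// so it terminates only when a full pass finds no snow-white objects.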

bool nsCycleCollector::FreeSnowWhiteWithBudget(js::SliceBudget& aBudget) {
  CheckThreadSafety();

  if (mFreeingSnowWhite) {
    return false;
  }

  AutoRestore<bool> ar(mFreeingSnowWhite);
  mFreeingSnowWhite = true;

  SnowWhiteKiller visitor(this, &aBudget);
  mPurpleBuf.VisitEntries(visitor);
  return visitor.SawSnowWhiteObjects();
}

void nsCycleCollector::ForgetSkippable(js::SliceBudget& aBudget,
                                       bool aRemoveChildlessNodes,
                                       bool aAsyncSnowWhiteFreeing) {
  CheckThreadSafety();

  if (mFreeingSnowWhite) {
    return;
  }

  mozilla::Maybe<mozilla::AutoGlobalTimelineMarker> marker;
  if (NS_IsMainThread()) {
    marker.emplace("nsCycleCollector::ForgetSkippable",
                   MarkerStackRequest::NO_STACK);
  }

  // If we remove things from the purple buffer during graph building, we may
  // lose track of an object that was mutated during graph building.
  MOZ_ASSERT(IsIdle());

  // The cycle collector does not collect anything when recording/replaying.
  if (recordreplay::IsRecordingOrReplaying()) {
    return;
  }

  if (mCCJSRuntime) {
    mCCJSRuntime->PrepareForForgetSkippable();
  }
  MOZ_ASSERT(
      !mScanInProgress,
      "Don't forget skippable or free snow-white while scan is in progress.");
  mPurpleBuf.RemoveSkippable(this, aBudget, aRemoveChildlessNodes,
                             aAsyncSnowWhiteFreeing, mForgetSkippableCB);
}

MOZ_NEVER_INLINE void nsCycleCollector::MarkRoots(SliceBudget& aBudget) {
  JS::AutoAssertNoGC nogc;
  TimeLog timeLog;
  AutoRestore<bool> ar(mScanInProgress);
  MOZ_RELEASE_ASSERT(!mScanInProgress);
  mScanInProgress = true;
  MOZ_ASSERT(mIncrementalPhase == GraphBuildingPhase);

  JS::AutoEnterCycleCollection autocc(Runtime()->Runtime());
  bool doneBuilding = mBuilder->BuildGraph(aBudget);

  if (!doneBuilding) {
    timeLog.Checkpoint("MarkRoots()");
    return;
  }

  mBuilder = nullptr;
  mIncrementalPhase = ScanAndCollectWhitePhase;
  timeLog.Checkpoint("MarkRoots()");
}

////////////////////////////////////////////////////////////////////////
// Bacon & Rajan's |ScanRoots| routine.
////////////////////////////////////////////////////////////////////////

struct ScanBlackVisitor {
  ScanBlackVisitor(uint32_t& aWhiteNodeCount, bool& aFailed)
      : mWhiteNodeCount(aWhiteNodeCount), mFailed(aFailed) {}

  bool ShouldVisitNode(PtrInfo const* aPi) { return aPi->mColor != black; }

  MOZ_NEVER_INLINE void VisitNode(PtrInfo* aPi) {
    if (aPi->mColor == white) {
      --mWhiteNodeCount;
    }
    aPi->mColor = black;
  }

  void Failed() { mFailed = true; }

 private:
  uint32_t& mWhiteNodeCount;
  bool& mFailed;
};

static void FloodBlackNode(uint32_t& aWhiteNodeCount, bool& aFailed,
                           PtrInfo* aPi) {
  GraphWalker<ScanBlackVisitor>(ScanBlackVisitor(aWhiteNodeCount, aFailed))
      .Walk(aPi);
  MOZ_ASSERT(aPi->mColor == black || !aPi->WasTraversed(),
             "FloodBlackNode should make aPi black");
}

// Iterate over the WeakMaps. If we mark anything while iterating
// over the WeakMaps, we must iterate over all of the WeakMaps again.
void nsCycleCollector::ScanWeakMaps() {
  bool anyChanged;
  bool failed = false;
  do {
    anyChanged = false;
    for (uint32_t i = 0; i < mGraph.mWeakMaps.Length(); i++) {
      WeakMapping* wm = &mGraph.mWeakMaps[i];

      // If any of these are null, the original object was marked black.
      uint32_t mColor = wm->mMap ? wm->mMap->mColor : black;
      uint32_t kColor = wm->mKey ? wm->mKey->mColor : black;
      uint32_t kdColor = wm->mKeyDelegate ? wm->mKeyDelegate->mColor : black;
      uint32_t vColor = wm->mVal ? wm->mVal->mColor : black;

      MOZ_ASSERT(mColor != grey, "Uncolored weak map");
      MOZ_ASSERT(kColor != grey, "Uncolored weak map key");
      MOZ_ASSERT(kdColor != grey, "Uncolored weak map key delegate");
      MOZ_ASSERT(vColor != grey, "Uncolored weak map value");

      if (mColor == black && kColor != black && kdColor == black) {
        FloodBlackNode(mWhiteNodeCount, failed, wm->mKey);
        anyChanged = true;
      }

      if (mColor == black && kColor == black && vColor != black) {
        FloodBlackNode(mWhiteNodeCount, failed, wm->mVal);
        anyChanged = true;
      }
    }
  } while (anyChanged);

  if (failed) {
    MOZ_ASSERT(false, "Ran out of memory in ScanWeakMaps");
    CC_TELEMETRY(_OOM, true);
  }
}
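
// Illustrative sketch (comment-only, not part of the algorithm): each
// WeakMapping behaves like two conditional edges, evaluated to a fixed point:
//
//   if (live(map) && live(keyDelegate)) mark key live;    // map keeps key
//   if (live(map) && live(key))         mark value live;  // entry keeps value
//
// A null mMap/mKey/mKeyDelegate/mVal means that node was already known to be
// black, which is why null is treated as black above. The outer do/while
// reruns all maps until nothing changes, because marking a key live in one
// map can make a value live in another.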

// Flood black from any objects in the purple buffer that are in the CC graph.
class PurpleScanBlackVisitor {
 public:
  PurpleScanBlackVisitor(CCGraph& aGraph, nsCycleCollectorLogger* aLogger,
                         uint32_t& aCount, bool& aFailed)
      : mGraph(aGraph), mLogger(aLogger), mCount(aCount), mFailed(aFailed) {}

  bool Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry) {
    MOZ_ASSERT(aEntry->mObject,
               "Entries with null mObject shouldn't be in the purple buffer.");
    MOZ_ASSERT(aEntry->mRefCnt->get() != 0,
               "Snow-white objects shouldn't be in the purple buffer.");

    void* obj = aEntry->mObject;

    MOZ_ASSERT(
        aEntry->mParticipant ||
            CanonicalizeXPCOMParticipant(static_cast<nsISupports*>(obj)) == obj,
        "Suspect nsISupports pointer must be canonical");

    PtrInfo* pi = mGraph.FindNode(obj);
    if (!pi) {
      return true;
    }
    MOZ_ASSERT(pi->mParticipant,
               "No dead objects should be in the purple buffer.");
    if (MOZ_UNLIKELY(mLogger)) {
      mLogger->NoteIncrementalRoot((uint64_t)pi->mPointer);
    }
    if (pi->mColor == black) {
      return true;
    }
    FloodBlackNode(mCount, mFailed, pi);
    return true;
  }

 private:
  CCGraph& mGraph;
  RefPtr<nsCycleCollectorLogger> mLogger;
  uint32_t& mCount;
  bool& mFailed;
};

// Objects that have been stored somewhere since the start of incremental graph
// building must be treated as live for this cycle collection, because we may
// not have accurate information about who holds references to them.
void nsCycleCollector::ScanIncrementalRoots() {
  TimeLog timeLog;

  // Reference counted objects:
  // We cleared the purple buffer at the start of the current ICC, so if a
  // refcounted object is purple, it may have been AddRef'd during the current
  // ICC. (It may also have only been released.) If that is the case, we cannot
  // be sure that the set of things pointing to the object in the CC graph
  // is accurate. Therefore, for safety, we treat any purple objects as being
  // live during the current CC. We don't remove anything from the purple
  // buffer here, so these objects will be suspected and freed in the next CC
  // if they are garbage.
  bool failed = false;
  PurpleScanBlackVisitor purpleScanBlackVisitor(mGraph, mLogger,
                                                mWhiteNodeCount, failed);
  mPurpleBuf.VisitEntries(purpleScanBlackVisitor);
  timeLog.Checkpoint("ScanIncrementalRoots::fix purple");

  bool hasJSRuntime = !!mCCJSRuntime;
  nsCycleCollectionParticipant* jsParticipant =
      hasJSRuntime ? mCCJSRuntime->GCThingParticipant() : nullptr;
  nsCycleCollectionParticipant* zoneParticipant =
      hasJSRuntime ? mCCJSRuntime->ZoneParticipant() : nullptr;
  bool hasLogger = !!mLogger;

  NodePool::Enumerator etor(mGraph.mNodes);
  while (!etor.IsDone()) {
    PtrInfo* pi = etor.GetNext();

    // As an optimization, if an object has already been determined to be live,
    // don't consider it further. We can't do this if there is a listener,
    // because the listener wants to know the complete set of incremental roots.
    if (pi->mColor == black && MOZ_LIKELY(!hasLogger)) {
      continue;
    }

    // Garbage collected objects:
    // If a GCed object was added to the graph with a refcount of zero, and is
    // now marked black by the GC, it was probably gray before and was exposed
    // to active JS, so it may have been stored somewhere, so it needs to be
    // treated as live.
    if (pi->IsGrayJS() && MOZ_LIKELY(hasJSRuntime)) {
      // If the object is still marked gray by the GC, nothing could have gotten
      // hold of it, so it isn't an incremental root.
      if (pi->mParticipant == jsParticipant) {
        JS::GCCellPtr ptr(pi->mPointer, JS::GCThingTraceKind(pi->mPointer));
        if (GCThingIsGrayCCThing(ptr)) {
          continue;
        }
      } else if (pi->mParticipant == zoneParticipant) {
        JS::Zone* zone = static_cast<JS::Zone*>(pi->mPointer);
        if (js::ZoneGlobalsAreAllGray(zone)) {
          continue;
        }
      } else {
        MOZ_ASSERT(false, "Non-JS thing with 0 refcount? Treating as live.");
      }
    } else if (!pi->mParticipant && pi->WasTraversed()) {
      // Dead traversed refcounted objects:
      // If the object was traversed, it must have been alive at the start of
      // the CC, and thus had a positive refcount. It is dead now, so its
      // refcount must have decreased at some point during the CC. Therefore,
      // it would be in the purple buffer if it wasn't dead, so treat it as an
      // incremental root.
      //
      // This should not cause leaks because as the object died it should have
      // released anything it held onto, which will add them to the purple
      // buffer, which will cause them to be considered in the next CC.
    } else {
      continue;
    }

    // At this point, pi must be an incremental root.

    // If there's a listener, tell it about this root. We don't bother with the
    // optimization of skipping the Walk() if pi is black: it will just return
    // without doing anything and there's no need to make this case faster.
    if (MOZ_UNLIKELY(hasLogger) && pi->mPointer) {
      // Dead objects aren't logged. See bug 1031370.
      mLogger->NoteIncrementalRoot((uint64_t)pi->mPointer);
    }

    FloodBlackNode(mWhiteNodeCount, failed, pi);
  }

  timeLog.Checkpoint("ScanIncrementalRoots::fix nodes");

  if (failed) {
    NS_ASSERTION(false, "Ran out of memory in ScanIncrementalRoots");
    CC_TELEMETRY(_OOM, true);
  }
}

// Mark nodes white and make sure their refcounts are ok.
// No nodes are marked black during this pass to ensure that refcount
// checking is run on all nodes not marked black by ScanIncrementalRoots.
void nsCycleCollector::ScanWhiteNodes(bool aFullySynchGraphBuild) {
  NodePool::Enumerator nodeEnum(mGraph.mNodes);
  while (!nodeEnum.IsDone()) {
    PtrInfo* pi = nodeEnum.GetNext();
    if (pi->mColor == black) {
      // Incremental roots can be in a nonsensical state, so don't
      // check them. This will miss checking nodes that are merely
      // reachable from incremental roots.
      MOZ_ASSERT(!aFullySynchGraphBuild,
                 "In a synch CC, no nodes should be marked black early on.");
      continue;
    }
    MOZ_ASSERT(pi->mColor == grey);

    if (!pi->WasTraversed()) {
      // This node was deleted before it was traversed, so there's no reason
      // to look at it.
      MOZ_ASSERT(!pi->mParticipant,
                 "Live nodes should all have been traversed");
      continue;
    }

    if (pi->mInternalRefs == pi->mRefCount || pi->IsGrayJS()) {
      pi->mColor = white;
      ++mWhiteNodeCount;
      continue;
    }

    pi->AnnotatedReleaseAssert(
        pi->mInternalRefs <= pi->mRefCount,
        "More references to an object than its refcount");

    // This node will get marked black in the next pass.
  }
}
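
// Comment-only sketch of the invariant checked above: for a node kept alive
// solely by a garbage cycle, every reference to it is an edge inside the
// graph, so:
//
//   mInternalRefs == mRefCount  =>  candidate garbage (marked white)
//   mInternalRefs <  mRefCount  =>  some external caller holds a reference,
//                                   so the node goes black in ScanBlackNodes
//   mInternalRefs >  mRefCount  =>  impossible; caught by the release assert
//
// Gray JS nodes are whitened unconditionally here because the JS GC, not the
// refcount, decides their liveness.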

// Any remaining grey nodes that haven't already been deleted must be alive,
// so mark them and their children black. Any nodes that are black must have
// already had their children marked black, so there's no need to look at them
// again. This pass may turn some white nodes to black.
void nsCycleCollector::ScanBlackNodes() {
  bool failed = false;
  NodePool::Enumerator nodeEnum(mGraph.mNodes);
  while (!nodeEnum.IsDone()) {
    PtrInfo* pi = nodeEnum.GetNext();
    if (pi->mColor == grey && pi->WasTraversed()) {
      FloodBlackNode(mWhiteNodeCount, failed, pi);
    }
  }

  if (failed) {
    NS_ASSERTION(false, "Ran out of memory in ScanBlackNodes");
    CC_TELEMETRY(_OOM, true);
  }
}

void nsCycleCollector::ScanRoots(bool aFullySynchGraphBuild) {
  JS::AutoAssertNoGC nogc;
  AutoRestore<bool> ar(mScanInProgress);
  MOZ_RELEASE_ASSERT(!mScanInProgress);
  mScanInProgress = true;
  mWhiteNodeCount = 0;
  MOZ_ASSERT(mIncrementalPhase == ScanAndCollectWhitePhase);

  JS::AutoEnterCycleCollection autocc(Runtime()->Runtime());

  if (!aFullySynchGraphBuild) {
    ScanIncrementalRoots();
  }

  TimeLog timeLog;
  ScanWhiteNodes(aFullySynchGraphBuild);
  timeLog.Checkpoint("ScanRoots::ScanWhiteNodes");

  ScanBlackNodes();
  timeLog.Checkpoint("ScanRoots::ScanBlackNodes");

  // Scanning weak maps must be done last.
  ScanWeakMaps();
  timeLog.Checkpoint("ScanRoots::ScanWeakMaps");

  if (mLogger) {
    mLogger->BeginResults();

    NodePool::Enumerator etor(mGraph.mNodes);
    while (!etor.IsDone()) {
      PtrInfo* pi = etor.GetNext();
      if (!pi->WasTraversed()) {
        continue;
      }
      switch (pi->mColor) {
        case black:
          if (!pi->IsGrayJS() && !pi->IsBlackJS() &&
              pi->mInternalRefs != pi->mRefCount) {
            mLogger->DescribeRoot((uint64_t)pi->mPointer, pi->mInternalRefs);
          }
          break;
        case white:
          mLogger->DescribeGarbage((uint64_t)pi->mPointer);
          break;
        case grey:
          MOZ_ASSERT(false, "All traversed objects should be black or white");
          break;
      }
    }

    mLogger->End();
    mLogger = nullptr;
    timeLog.Checkpoint("ScanRoots::listener");
  }
}

////////////////////////////////////////////////////////////////////////
// Bacon & Rajan's |CollectWhite| routine, somewhat modified.
////////////////////////////////////////////////////////////////////////

bool nsCycleCollector::CollectWhite() {
  // Explanation of "somewhat modified": we have no way to collect the
  // set of whites "all at once", we have to ask each of them to drop
  // their outgoing links and assume this will cause the garbage cycle
  // to *mostly* self-destruct (except for the reference we continue
  // to hold).
  //
  // To do this "safely" we must make sure that the white nodes we're
  // operating on are stable for the duration of our operation. So we
  // make 3 sets of calls to language runtimes:
  //
  // - Root(whites), which should pin the whites in memory.
  // - Unlink(whites), which drops outgoing links on each white.
  // - Unroot(whites), which returns the whites to normal GC.

  // Segments are 4 KiB on 32-bit and 8 KiB on 64-bit.
  static const size_t kSegmentSize = sizeof(void*) * 1024;
  SegmentedVector<PtrInfo*, kSegmentSize, InfallibleAllocPolicy> whiteNodes(
      kSegmentSize);
  TimeLog timeLog;

  MOZ_ASSERT(mIncrementalPhase == ScanAndCollectWhitePhase);

  uint32_t numWhiteNodes = 0;
  uint32_t numWhiteGCed = 0;
  uint32_t numWhiteJSZones = 0;

  {
    JS::AutoAssertNoGC nogc;
    bool hasJSRuntime = !!mCCJSRuntime;
    nsCycleCollectionParticipant* zoneParticipant =
        hasJSRuntime ? mCCJSRuntime->ZoneParticipant() : nullptr;

    NodePool::Enumerator etor(mGraph.mNodes);
    while (!etor.IsDone()) {
      PtrInfo* pinfo = etor.GetNext();
      if (pinfo->mColor == white && pinfo->mParticipant) {
        if (pinfo->IsGrayJS()) {
          MOZ_ASSERT(mCCJSRuntime);
          ++numWhiteGCed;
          JS::Zone* zone;
          if (MOZ_UNLIKELY(pinfo->mParticipant == zoneParticipant)) {
            ++numWhiteJSZones;
            zone = static_cast<JS::Zone*>(pinfo->mPointer);
          } else {
            JS::GCCellPtr ptr(pinfo->mPointer,
                              JS::GCThingTraceKind(pinfo->mPointer));
            zone = JS::GetTenuredGCThingZone(ptr);
          }
          mCCJSRuntime->AddZoneWaitingForGC(zone);
        } else {
          whiteNodes.InfallibleAppend(pinfo);
          pinfo->mParticipant->Root(pinfo->mPointer);
          ++numWhiteNodes;
        }
      }
    }
  }

  mResults.mFreedRefCounted += numWhiteNodes;
  mResults.mFreedGCed += numWhiteGCed;
  mResults.mFreedJSZones += numWhiteJSZones;

  timeLog.Checkpoint("CollectWhite::Root");

  if (mBeforeUnlinkCB) {
    mBeforeUnlinkCB();
    timeLog.Checkpoint("CollectWhite::BeforeUnlinkCB");
  }

  // Unlink() can trigger a GC, so do not touch any JS or anything
  // else not in whiteNodes after here.

  for (auto iter = whiteNodes.Iter(); !iter.Done(); iter.Next()) {
    PtrInfo* pinfo = iter.Get();
    MOZ_ASSERT(pinfo->mParticipant,
               "Unlink shouldn't see objects removed from graph.");
    pinfo->mParticipant->Unlink(pinfo->mPointer);
#ifdef DEBUG
    if (mCCJSRuntime) {
      mCCJSRuntime->AssertNoObjectsToTrace(pinfo->mPointer);
    }
#endif
  }
  timeLog.Checkpoint("CollectWhite::Unlink");

  JS::AutoAssertNoGC nogc;
  for (auto iter = whiteNodes.Iter(); !iter.Done(); iter.Next()) {
    PtrInfo* pinfo = iter.Get();
    MOZ_ASSERT(pinfo->mParticipant,
               "Unroot shouldn't see objects removed from graph.");
    pinfo->mParticipant->Unroot(pinfo->mPointer);
  }
  timeLog.Checkpoint("CollectWhite::Unroot");

  nsCycleCollector_dispatchDeferredDeletion(false, true);
  timeLog.Checkpoint("CollectWhite::dispatchDeferredDeletion");

  mIncrementalPhase = CleanupPhase;

  return numWhiteNodes > 0 || numWhiteGCed > 0 || numWhiteJSZones > 0;
}
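
// Comment-only sketch of the Root/Unlink/Unroot protocol used above, for a
// hypothetical refcounted participant holding two strong members (mFoo and
// mBar are illustrative names, not real members):
//
//   Root(p):   NS_ADDREF(p);        // pin p so Unlink of others can't free it
//   Unlink(p): p->mFoo = nullptr;   // drop outgoing strong edges; the cycle
//              p->mBar = nullptr;   //   loses its internal references
//   Unroot(p): NS_RELEASE(p);       // drop our pin; p self-destructs once
//                                   //   its refcount reaches zero
//
// Rooting every white node first is what makes Unlink safe: no node can be
// deallocated while another node's Unlink is still running.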

////////////////////////
// Memory reporting
////////////////////////

MOZ_DEFINE_MALLOC_SIZE_OF(CycleCollectorMallocSizeOf)

NS_IMETHODIMP
nsCycleCollector::CollectReports(nsIHandleReportCallback* aHandleReport,
                                 nsISupports* aData, bool aAnonymize) {
  size_t objectSize, graphSize, purpleBufferSize;
  SizeOfIncludingThis(CycleCollectorMallocSizeOf, &objectSize, &graphSize,
                      &purpleBufferSize);

  if (objectSize > 0) {
    MOZ_COLLECT_REPORT("explicit/cycle-collector/collector-object", KIND_HEAP,
                       UNITS_BYTES, objectSize,
                       "Memory used for the cycle collector object itself.");
  }

  if (graphSize > 0) {
    MOZ_COLLECT_REPORT(
        "explicit/cycle-collector/graph", KIND_HEAP, UNITS_BYTES, graphSize,
        "Memory used for the cycle collector's graph. This should be zero when "
        "the collector is idle.");
  }

  if (purpleBufferSize > 0) {
    MOZ_COLLECT_REPORT("explicit/cycle-collector/purple-buffer", KIND_HEAP,
                       UNITS_BYTES, purpleBufferSize,
                       "Memory used for the cycle collector's purple buffer.");
  }

  return NS_OK;
}

////////////////////////////////////////////////////////////////////////
// Collector implementation
////////////////////////////////////////////////////////////////////////

nsCycleCollector::nsCycleCollector()
    : mActivelyCollecting(false),
      mFreeingSnowWhite(false),
      mScanInProgress(false),
      mCCJSRuntime(nullptr),
      mIncrementalPhase(IdlePhase),
#ifdef DEBUG
      mEventTarget(GetCurrentThreadSerialEventTarget()),
#endif
      mWhiteNodeCount(0),
      mBeforeUnlinkCB(nullptr),
      mForgetSkippableCB(nullptr),
      mUnmergedNeeded(0),
      mMergedInARow(0) {
}

nsCycleCollector::~nsCycleCollector() {
  MOZ_ASSERT(!mJSPurpleBuffer, "Didn't call JSPurpleBuffer::Destroy?");

  UnregisterWeakMemoryReporter(this);
}

void nsCycleCollector::SetCCJSRuntime(CycleCollectedJSRuntime* aCCRuntime) {
  MOZ_RELEASE_ASSERT(
      !mCCJSRuntime,
      "Multiple registrations of CycleCollectedJSRuntime in cycle collector");
  mCCJSRuntime = aCCRuntime;

  if (!NS_IsMainThread()) {
    return;
  }

  // We can't register as a reporter in nsCycleCollector() because that runs
  // before the memory reporter manager is initialized. So we do it here
  // instead.
  RegisterWeakMemoryReporter(this);
}

void nsCycleCollector::ClearCCJSRuntime() {
  MOZ_RELEASE_ASSERT(mCCJSRuntime,
                     "Clearing CycleCollectedJSRuntime in cycle collector "
                     "before a runtime was registered");
  mCCJSRuntime = nullptr;
}

#ifdef DEBUG
static bool HasParticipant(void* aPtr, nsCycleCollectionParticipant* aParti) {
  if (aParti) {
    return true;
  }

  nsXPCOMCycleCollectionParticipant* xcp;
  ToParticipant(static_cast<nsISupports*>(aPtr), &xcp);
  return xcp != nullptr;
}
#endif
|
|
|
|
|
MOZ_ALWAYS_INLINE void nsCycleCollector::Suspect(
    void* aPtr, nsCycleCollectionParticipant* aParti,
    nsCycleCollectingAutoRefCnt* aRefCnt) {
  CheckThreadSafety();

  // Don't call AddRef or Release of a CCed object in a Traverse() method.
  MOZ_ASSERT(!mScanInProgress,
             "Attempted to call Suspect() while a scan was in progress");

  if (MOZ_UNLIKELY(mScanInProgress)) {
    return;
  }

  MOZ_ASSERT(aPtr, "Don't suspect null pointers");

  MOZ_ASSERT(HasParticipant(aPtr, aParti),
             "Suspected nsISupports pointer must QI to "
             "nsXPCOMCycleCollectionParticipant");

  MOZ_ASSERT(aParti || CanonicalizeXPCOMParticipant(
                           static_cast<nsISupports*>(aPtr)) == aPtr,
             "Suspected nsISupports pointer must be canonical");

  mPurpleBuf.Put(aPtr, aParti, aRefCnt);
}

void nsCycleCollector::SuspectNurseryEntries() {
  MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
  while (gNurseryPurpleBufferEntryCount) {
    NurseryPurpleBufferEntry& entry =
        gNurseryPurpleBufferEntry[--gNurseryPurpleBufferEntryCount];
    mPurpleBuf.Put(entry.mPtr, entry.mParticipant, entry.mRefCnt);
  }
}

void nsCycleCollector::CheckThreadSafety() {
#ifdef DEBUG
  MOZ_ASSERT(mEventTarget->IsOnCurrentThread());
#endif
}

// The cycle collector uses the mark bitmap to discover what JS objects
// were reachable only from XPConnect roots that might participate in
// cycles. We ask the JS context whether we need to force a GC before
// this CC. It returns true on startup (before the mark bits have been set),
// and also when UnmarkGray has run out of stack. We also force GCs on
// shutdown to collect cycles involving both DOM and JS.
void nsCycleCollector::FixGrayBits(bool aForceGC, TimeLog& aTimeLog) {
  CheckThreadSafety();

  if (!mCCJSRuntime) {
    return;
  }

  if (!aForceGC) {
    mCCJSRuntime->FixWeakMappingGrayBits();
    aTimeLog.Checkpoint("FixWeakMappingGrayBits");

    bool needGC = !mCCJSRuntime->AreGCGrayBitsValid();
    // Only do a telemetry ping for non-shutdown CCs.
    CC_TELEMETRY(_NEED_GC, needGC);
    if (!needGC) {
      return;
    }
    mResults.mForcedGC = true;
  }

  uint32_t count = 0;
  do {
    mCCJSRuntime->GarbageCollect(aForceGC ? JS::GCReason::SHUTDOWN_CC
                                          : JS::GCReason::CC_FORCED);

    mCCJSRuntime->FixWeakMappingGrayBits();

    // It's possible that FixWeakMappingGrayBits will hit OOM when unmarking
    // gray and we will have to go round again. The second time there should
    // not be any weak mappings to fix up, so the loop body should run at most
    // twice.
    MOZ_RELEASE_ASSERT(count < 2);
    count++;
  } while (!mCCJSRuntime->AreGCGrayBitsValid());

  aTimeLog.Checkpoint("FixGrayBits");
}

bool nsCycleCollector::IsIncrementalGCInProgress() {
  return mCCJSRuntime && JS::IsIncrementalGCInProgress(mCCJSRuntime->Runtime());
}

void nsCycleCollector::FinishAnyIncrementalGCInProgress() {
  if (IsIncrementalGCInProgress()) {
    NS_WARNING("Finishing incremental GC in progress during CC");
    JSContext* cx = CycleCollectedJSContext::Get()->Context();
    JS::PrepareForIncrementalGC(cx);
    JS::FinishIncrementalGC(cx, JS::GCReason::CC_FORCED);
  }
}

void nsCycleCollector::CleanupAfterCollection() {
  TimeLog timeLog;
  MOZ_ASSERT(mIncrementalPhase == CleanupPhase);
  MOZ_RELEASE_ASSERT(!mScanInProgress);
  mGraph.Clear();
  timeLog.Checkpoint("CleanupAfterCollection::mGraph.Clear()");

  uint32_t interval =
      (uint32_t)((TimeStamp::Now() - mCollectionStart).ToMilliseconds());
#ifdef COLLECT_TIME_DEBUG
  printf("cc: total cycle collector time was %ums in %u slices\n", interval,
         mResults.mNumSlices);
  printf(
      "cc: visited %u ref counted and %u GCed objects, freed %d ref counted "
      "and %d GCed objects",
      mResults.mVisitedRefCounted, mResults.mVisitedGCed,
      mResults.mFreedRefCounted, mResults.mFreedGCed);
  uint32_t numVisited = mResults.mVisitedRefCounted + mResults.mVisitedGCed;
  if (numVisited > 1000) {
    uint32_t numFreed = mResults.mFreedRefCounted + mResults.mFreedGCed;
    printf(" (%d%%)", 100 * numFreed / numVisited);
  }
  printf(".\ncc: \n");
#endif

  CC_TELEMETRY(, interval);
  CC_TELEMETRY(_VISITED_REF_COUNTED, mResults.mVisitedRefCounted);
  CC_TELEMETRY(_VISITED_GCED, mResults.mVisitedGCed);
  CC_TELEMETRY(_COLLECTED, mWhiteNodeCount);
  timeLog.Checkpoint("CleanupAfterCollection::telemetry");

  if (mCCJSRuntime) {
    mCCJSRuntime->FinalizeDeferredThings(
        mResults.mAnyManual ? CycleCollectedJSContext::FinalizeNow
                            : CycleCollectedJSContext::FinalizeIncrementally);
    mCCJSRuntime->EndCycleCollectionCallback(mResults);
    timeLog.Checkpoint("CleanupAfterCollection::EndCycleCollectionCallback()");
  }
  mIncrementalPhase = IdlePhase;
}

void nsCycleCollector::ShutdownCollect() {
  FinishAnyIncrementalGCInProgress();
  JS::ShutdownAsyncTasks(CycleCollectedJSContext::Get()->Context());

  SliceBudget unlimitedBudget = SliceBudget::unlimited();
  uint32_t i;
  for (i = 0; i < DEFAULT_SHUTDOWN_COLLECTIONS; ++i) {
    if (!Collect(ShutdownCC, unlimitedBudget, nullptr)) {
      break;
    }
  }
  NS_WARNING_ASSERTION(i < NORMAL_SHUTDOWN_COLLECTIONS, "Extra shutdown CC");
}

static void PrintPhase(const char* aPhase) {
#ifdef DEBUG_PHASES
  printf("cc: begin %s on %s\n", aPhase,
         NS_IsMainThread() ? "mainthread" : "worker");
#endif
}

bool nsCycleCollector::Collect(ccType aCCType, SliceBudget& aBudget,
                               nsICycleCollectorListener* aManualListener,
                               bool aPreferShorterSlices) {
  CheckThreadSafety();

  // This can legitimately happen in a few cases. See bug 383651.
  // When recording/replaying we do not collect cycles.
  if (mActivelyCollecting || mFreeingSnowWhite ||
      recordreplay::IsRecordingOrReplaying()) {
    return false;
  }
  mActivelyCollecting = true;

  MOZ_ASSERT(!IsIncrementalGCInProgress());

  mozilla::Maybe<mozilla::AutoGlobalTimelineMarker> marker;
  if (NS_IsMainThread()) {
    marker.emplace("nsCycleCollector::Collect", MarkerStackRequest::NO_STACK);
  }

  bool startedIdle = IsIdle();
  bool collectedAny = false;

  // If the CC started idle, it will call BeginCollection, which
  // will do FreeSnowWhite, so it doesn't need to be done here.
  if (!startedIdle) {
    TimeLog timeLog;
    FreeSnowWhite(true);
    timeLog.Checkpoint("Collect::FreeSnowWhite");
  }

  if (aCCType != SliceCC) {
    mResults.mAnyManual = true;
  }

  ++mResults.mNumSlices;

  bool continueSlice = aBudget.isUnlimited() || !aPreferShorterSlices;
  do {
    switch (mIncrementalPhase) {
      case IdlePhase:
        PrintPhase("BeginCollection");
        BeginCollection(aCCType, aManualListener);
        break;
      case GraphBuildingPhase:
        PrintPhase("MarkRoots");
        MarkRoots(aBudget);

        // Only continue this slice if we're running synchronously or the
        // next phase will probably be short, to reduce the max pause for this
        // collection.
        // (There's no need to check if we've finished graph building, because
        // if we haven't, we've already exceeded our budget, and will finish
        // this slice anyways.)
        continueSlice = aBudget.isUnlimited() ||
                        (mResults.mNumSlices < 3 && !aPreferShorterSlices);
        break;
      case ScanAndCollectWhitePhase:
        // We do ScanRoots and CollectWhite in a single slice to ensure
        // that we won't unlink a live object if a weak reference is
        // promoted to a strong reference after ScanRoots has finished.
        // See bug 926533.
        PrintPhase("ScanRoots");
        ScanRoots(startedIdle);
        PrintPhase("CollectWhite");
        collectedAny = CollectWhite();
        break;
      case CleanupPhase:
        PrintPhase("CleanupAfterCollection");
        CleanupAfterCollection();
        continueSlice = false;
        break;
    }
    if (continueSlice) {
      // Force SliceBudget::isOverBudget to check the time.
      aBudget.step(SliceBudget::CounterReset);
      continueSlice = !aBudget.isOverBudget();
    }
  } while (continueSlice);

  // Clear mActivelyCollecting here to ensure that a recursive call to
  // Collect() does something.
  mActivelyCollecting = false;

  if (aCCType != SliceCC && !startedIdle) {
    // We were in the middle of an incremental CC (using its own listener).
    // Somebody has forced a CC, so after having finished out the current CC,
    // run the CC again using the new listener.
    MOZ_ASSERT(IsIdle());
    if (Collect(aCCType, aBudget, aManualListener)) {
      collectedAny = true;
    }
  }

  MOZ_ASSERT_IF(aCCType != SliceCC, IsIdle());

  return collectedAny;
}

// Any JS objects we have in the graph could die when we GC, but we
// don't want to abandon the current CC, because the graph contains
// information about purple roots. So we synchronously finish off
// the current CC.
void nsCycleCollector::PrepareForGarbageCollection() {
  if (IsIdle()) {
    MOZ_ASSERT(mGraph.IsEmpty(), "Non-empty graph when idle");
    MOZ_ASSERT(!mBuilder, "Non-null builder when idle");
    if (mJSPurpleBuffer) {
      mJSPurpleBuffer->Destroy();
    }
    return;
  }

  FinishAnyCurrentCollection();
}

void nsCycleCollector::FinishAnyCurrentCollection() {
  if (IsIdle()) {
    return;
  }

  SliceBudget unlimitedBudget = SliceBudget::unlimited();
  PrintPhase("FinishAnyCurrentCollection");
  // Use SliceCC because we only want to finish the CC in progress.
  Collect(SliceCC, unlimitedBudget, nullptr);

  // It is only okay for Collect() to have failed to finish the
  // current CC if we're reentering the CC at some point past
  // graph building. We need to be past the point where the CC will
  // look at JS objects so that it is safe to GC.
  MOZ_ASSERT(IsIdle() || (mActivelyCollecting &&
                          mIncrementalPhase != GraphBuildingPhase),
             "Reentered CC during graph building");
}

// Don't merge too many times in a row, and do at least a minimum
// number of unmerged CCs in a row.
static const uint32_t kMinConsecutiveUnmerged = 3;
static const uint32_t kMaxConsecutiveMerged = 3;

bool nsCycleCollector::ShouldMergeZones(ccType aCCType) {
  if (!mCCJSRuntime) {
    return false;
  }

  MOZ_ASSERT(mUnmergedNeeded <= kMinConsecutiveUnmerged);
  MOZ_ASSERT(mMergedInARow <= kMaxConsecutiveMerged);

  if (mMergedInARow == kMaxConsecutiveMerged) {
    MOZ_ASSERT(mUnmergedNeeded == 0);
    mUnmergedNeeded = kMinConsecutiveUnmerged;
  }

  if (mUnmergedNeeded > 0) {
    mUnmergedNeeded--;
    mMergedInARow = 0;
    return false;
  }

  if (aCCType == SliceCC && mCCJSRuntime->UsefulToMergeZones()) {
    mMergedInARow++;
    return true;
  } else {
    mMergedInARow = 0;
    return false;
  }
}

void nsCycleCollector::BeginCollection(
    ccType aCCType, nsICycleCollectorListener* aManualListener) {
  TimeLog timeLog;
  MOZ_ASSERT(IsIdle());
  MOZ_RELEASE_ASSERT(!mScanInProgress);

  mCollectionStart = TimeStamp::Now();

  if (mCCJSRuntime) {
    mCCJSRuntime->BeginCycleCollectionCallback();
    timeLog.Checkpoint("BeginCycleCollectionCallback()");
  }

  bool isShutdown = (aCCType == ShutdownCC);

  // Set up the listener for this CC.
  MOZ_ASSERT_IF(isShutdown, !aManualListener);
  MOZ_ASSERT(!mLogger, "Forgot to clear a previous listener?");

  if (aManualListener) {
    aManualListener->AsLogger(getter_AddRefs(mLogger));
  }

  aManualListener = nullptr;
  if (!mLogger && mParams.LogThisCC(isShutdown)) {
    mLogger = new nsCycleCollectorLogger();
    if (mParams.AllTracesThisCC(isShutdown)) {
      mLogger->SetAllTraces();
    }
  }

  // On a WantAllTraces CC, force a synchronous global GC to prevent
  // hijinks from ForgetSkippable and compartmental GCs.
  bool forceGC = isShutdown || (mLogger && mLogger->IsAllTraces());

  // BeginCycleCollectionCallback() might have started an IGC, and we need
  // to finish it before we run FixGrayBits.
  FinishAnyIncrementalGCInProgress();
  timeLog.Checkpoint("Pre-FixGrayBits finish IGC");

  FixGrayBits(forceGC, timeLog);
  if (mCCJSRuntime) {
    mCCJSRuntime->CheckGrayBits();
  }

  FreeSnowWhite(true);
  timeLog.Checkpoint("BeginCollection FreeSnowWhite");

  if (mLogger && NS_FAILED(mLogger->Begin())) {
    mLogger = nullptr;
  }

  // FreeSnowWhite could potentially have started an IGC, which we need
  // to finish before we look at any JS roots.
  FinishAnyIncrementalGCInProgress();
  timeLog.Checkpoint("Post-FreeSnowWhite finish IGC");

  // Set up the data structures for building the graph.
  JS::AutoAssertNoGC nogc;
  JS::AutoEnterCycleCollection autocc(mCCJSRuntime->Runtime());
  mGraph.Init();
  mResults.Init();
  mResults.mAnyManual = (aCCType != SliceCC);
  bool mergeZones = ShouldMergeZones(aCCType);
  mResults.mMergedZones = mergeZones;

  MOZ_ASSERT(!mBuilder, "Forgot to clear mBuilder");
  mBuilder =
      new CCGraphBuilder(mGraph, mResults, mCCJSRuntime, mLogger, mergeZones);
  timeLog.Checkpoint("BeginCollection prepare graph builder");

  if (mCCJSRuntime) {
    mCCJSRuntime->TraverseRoots(*mBuilder);
    timeLog.Checkpoint("mCCJSRuntime->TraverseRoots()");
  }

  AutoRestore<bool> ar(mScanInProgress);
  MOZ_RELEASE_ASSERT(!mScanInProgress);
  mScanInProgress = true;
  mPurpleBuf.SelectPointers(*mBuilder);
  timeLog.Checkpoint("SelectPointers()");

  mBuilder->DoneAddingRoots();
  mIncrementalPhase = GraphBuildingPhase;
}

uint32_t nsCycleCollector::SuspectedCount() {
  CheckThreadSafety();
  if (NS_IsMainThread()) {
    return gNurseryPurpleBufferEntryCount + mPurpleBuf.Count();
  }

  return mPurpleBuf.Count();
}

void nsCycleCollector::Shutdown(bool aDoCollect) {
  CheckThreadSafety();

  if (NS_IsMainThread()) {
    gNurseryPurpleBufferEnabled = false;
  }

  // Always delete snow white objects.
  FreeSnowWhite(true);

  if (aDoCollect) {
    ShutdownCollect();
  }

  if (mJSPurpleBuffer) {
    mJSPurpleBuffer->Destroy();
  }
}

void nsCycleCollector::RemoveObjectFromGraph(void* aObj) {
  if (IsIdle()) {
    return;
  }

  mGraph.RemoveObjectFromMap(aObj);
  if (mBuilder) {
    mBuilder->RemoveCachedEntry(aObj);
  }
}

void nsCycleCollector::SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
                                           size_t* aObjectSize,
                                           size_t* aGraphSize,
                                           size_t* aPurpleBufferSize) const {
  *aObjectSize = aMallocSizeOf(this);

  *aGraphSize = mGraph.SizeOfExcludingThis(aMallocSizeOf);

  *aPurpleBufferSize = mPurpleBuf.SizeOfExcludingThis(aMallocSizeOf);

  // These fields are deliberately not measured:
  // - mCCJSRuntime: because it's non-owning and measured by JS reporters.
  // - mParams: because it only contains scalars.
}

JSPurpleBuffer* nsCycleCollector::GetJSPurpleBuffer() {
  if (!mJSPurpleBuffer) {
    // The Release call here confuses the GC analysis.
    JS::AutoSuppressGCAnalysis nogc;
    // JSPurpleBuffer keeps itself alive, but we need to create it in such a
    // way that it ends up in the normal purple buffer. That happens when the
    // RefPtr goes out of scope and calls Release.
    RefPtr<JSPurpleBuffer> pb = new JSPurpleBuffer(mJSPurpleBuffer);
  }
  return mJSPurpleBuffer;
}

////////////////////////////////////////////////////////////////////////
// Module public API (exported in nsCycleCollector.h)
// Just functions that redirect into the singleton, once it's built.
////////////////////////////////////////////////////////////////////////

void nsCycleCollector_registerJSContext(CycleCollectedJSContext* aCx) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);
  // But we shouldn't already have a context.
  MOZ_ASSERT(!data->mContext);

  data->mContext = aCx;
  data->mCollector->SetCCJSRuntime(aCx->Runtime());
}

void nsCycleCollector_forgetJSContext() {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  // And we shouldn't have already forgotten our context.
  MOZ_ASSERT(data->mContext);

  // But the collector may have shut down already.
  if (data->mCollector) {
    data->mCollector->ClearCCJSRuntime();
    data->mContext = nullptr;
  } else {
    data->mContext = nullptr;
    delete data;
    sCollectorData.set(nullptr);
  }
}

/* static */
CycleCollectedJSContext* CycleCollectedJSContext::Get() {
  CollectorData* data = sCollectorData.get();
  if (data) {
    return data->mContext;
  }
  return nullptr;
}

MOZ_NEVER_INLINE static void SuspectAfterShutdown(
    void* aPtr, nsCycleCollectionParticipant* aCp,
    nsCycleCollectingAutoRefCnt* aRefCnt, bool* aShouldDelete) {
  if (aRefCnt->get() == 0) {
    if (!aShouldDelete) {
      // The CC is shut down, so we can't be in the middle of an ICC.
      ToParticipant(aPtr, &aCp);
      aRefCnt->stabilizeForDeletion();
      aCp->DeleteCycleCollectable(aPtr);
    } else {
      *aShouldDelete = true;
    }
  } else {
    // Make sure we'll get called again.
    aRefCnt->RemoveFromPurpleBuffer();
  }
}

void NS_CycleCollectorSuspect3(void* aPtr, nsCycleCollectionParticipant* aCp,
                               nsCycleCollectingAutoRefCnt* aRefCnt,
                               bool* aShouldDelete) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);

  if (MOZ_LIKELY(data->mCollector)) {
    data->mCollector->Suspect(aPtr, aCp, aRefCnt);
    return;
  }
  SuspectAfterShutdown(aPtr, aCp, aRefCnt, aShouldDelete);
}

void ClearNurseryPurpleBuffer() {
  MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
  CollectorData* data = sCollectorData.get();
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);
  data->mCollector->SuspectNurseryEntries();
}

void NS_CycleCollectorSuspectUsingNursery(void* aPtr,
                                          nsCycleCollectionParticipant* aCp,
                                          nsCycleCollectingAutoRefCnt* aRefCnt,
                                          bool* aShouldDelete) {
  MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
  if (!gNurseryPurpleBufferEnabled) {
    NS_CycleCollectorSuspect3(aPtr, aCp, aRefCnt, aShouldDelete);
    return;
  }

  SuspectUsingNurseryPurpleBuffer(aPtr, aCp, aRefCnt);
}

uint32_t nsCycleCollector_suspectedCount() {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);

  // When recording/replaying we do not collect cycles. Return zero here so
  // that callers behave consistently between recording and replaying.
  if (!data->mCollector || recordreplay::IsRecordingOrReplaying()) {
    return 0;
  }

  return data->mCollector->SuspectedCount();
}

bool nsCycleCollector_init() {
#ifdef DEBUG
  static bool sInitialized;

  MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
  MOZ_ASSERT(!sInitialized, "Called twice!?");
  sInitialized = true;
#endif

  return sCollectorData.init();
}

2017-04-24 23:54:27 +03:00
|
|
|
static nsCycleCollector* gMainThreadCollector;
|
|
|
|
|
2013-08-13 21:45:32 +04:00
|
|
|
void nsCycleCollector_startup() {
|
2014-05-05 21:30:39 +04:00
|
|
|
if (sCollectorData.get()) {
|
|
|
|
MOZ_CRASH();
|
|
|
|
}
|
2010-11-12 01:52:30 +03:00
|
|
|
|
2014-05-05 21:30:39 +04:00
|
|
|
CollectorData* data = new CollectorData;
|
|
|
|
data->mCollector = new nsCycleCollector();
|
2016-09-14 16:47:32 +03:00
|
|
|
data->mContext = nullptr;
|
2010-11-12 01:52:30 +03:00
|
|
|
|
2014-05-05 21:30:39 +04:00
|
|
|
sCollectorData.set(data);
|
2017-04-24 23:54:27 +03:00
|
|
|
|
|
|
|
if (NS_IsMainThread()) {
|
|
|
|
MOZ_ASSERT(!gMainThreadCollector);
|
|
|
|
gMainThreadCollector = data->mCollector;
|
|
|
|
}
|
|
|
|
}

void nsCycleCollector_registerNonPrimaryContext(CycleCollectedJSContext* aCx) {
  if (sCollectorData.get()) {
    MOZ_CRASH();
  }

  MOZ_ASSERT(gMainThreadCollector);

  CollectorData* data = new CollectorData;

  data->mCollector = gMainThreadCollector;
  data->mContext = aCx;

  sCollectorData.set(data);
}

void nsCycleCollector_forgetNonPrimaryContext() {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  // And we shouldn't have already forgotten our context.
  MOZ_ASSERT(data->mContext);
  // We should not have shut down the cycle collector yet.
  MOZ_ASSERT(data->mCollector);

  delete data;
  sCollectorData.set(nullptr);
}

void nsCycleCollector_setBeforeUnlinkCallback(CC_BeforeUnlinkCallback aCB) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  data->mCollector->SetBeforeUnlinkCallback(aCB);
}

void nsCycleCollector_setForgetSkippableCallback(
    CC_ForgetSkippableCallback aCB) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  data->mCollector->SetForgetSkippableCallback(aCB);
}

void nsCycleCollector_forgetSkippable(js::SliceBudget& aBudget,
                                      bool aRemoveChildlessNodes,
                                      bool aAsyncSnowWhiteFreeing) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  AUTO_PROFILER_LABEL("nsCycleCollector_forgetSkippable", GCCC);

  TimeLog timeLog;
  data->mCollector->ForgetSkippable(aBudget, aRemoveChildlessNodes,
                                    aAsyncSnowWhiteFreeing);
  timeLog.Checkpoint("ForgetSkippable()");
}

// aPurge requests a purge of jemalloc's dirty pages after deferred deletion,
// keeping RSS down without enforcing an aggressive dirty-page ratio during
// free(). See bug 1203840.
void nsCycleCollector_dispatchDeferredDeletion(bool aContinuation,
                                               bool aPurge) {
  CycleCollectedJSRuntime* rt = CycleCollectedJSRuntime::Get();
  if (rt) {
    rt->DispatchDeferredDeletion(aContinuation, aPurge);
  }
}

bool nsCycleCollector_doDeferredDeletion() {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);
  MOZ_ASSERT(data->mContext);

  return data->mCollector->FreeSnowWhite(false);
}

bool nsCycleCollector_doDeferredDeletionWithBudget(js::SliceBudget& aBudget) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);
  MOZ_ASSERT(data->mContext);

  return data->mCollector->FreeSnowWhiteWithBudget(aBudget);
}

already_AddRefed<nsICycleCollectorLogSink> nsCycleCollector_createLogSink() {
  nsCOMPtr<nsICycleCollectorLogSink> sink = new nsCycleCollectorLogSinkToFile();
  return sink.forget();
}

void nsCycleCollector_collect(nsICycleCollectorListener* aManualListener) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  AUTO_PROFILER_LABEL("nsCycleCollector_collect", GCCC);

  SliceBudget unlimitedBudget = SliceBudget::unlimited();
  data->mCollector->Collect(ManualCC, unlimitedBudget, aManualListener);
}

void nsCycleCollector_collectSlice(SliceBudget& budget,
                                   bool aPreferShorterSlices) {
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  AUTO_PROFILER_LABEL("nsCycleCollector_collectSlice", GCCC);

  data->mCollector->Collect(SliceCC, budget, nullptr, aPreferShorterSlices);
}

void nsCycleCollector_prepareForGarbageCollection() {
  CollectorData* data = sCollectorData.get();

  MOZ_ASSERT(data);

  if (!data->mCollector) {
    return;
  }

  data->mCollector->PrepareForGarbageCollection();
}

void nsCycleCollector_finishAnyCurrentCollection() {
  CollectorData* data = sCollectorData.get();

  MOZ_ASSERT(data);

  if (!data->mCollector) {
    return;
  }

  data->mCollector->FinishAnyCurrentCollection();
}

void nsCycleCollector_shutdown(bool aDoCollect) {
  CollectorData* data = sCollectorData.get();

  if (data) {
    MOZ_ASSERT(data->mCollector);
    AUTO_PROFILER_LABEL("nsCycleCollector_shutdown", OTHER);

    if (gMainThreadCollector == data->mCollector) {
      gMainThreadCollector = nullptr;
    }
    data->mCollector->Shutdown(aDoCollect);
    data->mCollector = nullptr;
    if (data->mContext) {
      // Run any remaining tasks that may have been enqueued via
      // RunInStableState or DispatchToMicroTask during the final cycle
      // collection.
      data->mContext->ProcessStableStateQueue();
      data->mContext->PerformMicroTaskCheckPoint(true);
    }
    if (!data->mContext) {
      delete data;
      sCollectorData.set(nullptr);
    }
  }
}