We want to ensure that nsThread's use of nsEventQueue uses locking done
in nsThread instead of nsEventQueue, for efficiency's sake: we only need
to lock once in nsThread, rather than the current situation of locking
in nsThread and additionally in nsEventQueue. With the current
structure of nsEventQueue, that would mean that nsThread should be using
a Monitor internally, rather than a Mutex.
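For reference, here is a minimal sketch of the difference, using the
mozilla::Mutex/CondVar/Monitor APIs (the member names are illustrative):

    // With a bare Mutex, a condition variable is a separate object that
    // is explicitly associated with the Mutex (this is what DOM workers
    // do with nsThread's mutex today):
    mozilla::Mutex mLock("nsThread.mLock");
    mozilla::CondVar mCondVar(mLock, "events available");

    // A Monitor bundles a mutex together with exactly one condition
    // variable, accessed via Wait()/Notify()/NotifyAll():
    mozilla::Monitor mMonitor("nsThread.mMonitor");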
That would be all well and good, except that DOM workers use nsThread's
mutex to protect their own, internal CondVar. Switching nsThread to use
a Monitor would mean that either:
- DOM workers drop their internal CondVar in favor of nsThread's
Monitor-owned CondVar. This change seems unlikely to work out well,
because now the Monitor-owned CondVar is performing double duty:
tracking availability of events in nsThread's event queue and
additionally whatever DOM workers were using a CondVar for. Having a
single CondVar track two things in such a fashion is for Experts Only.
- DOM workers grow their own Mutex to protect their own CondVar. Adding
a mutex like this would change locking in subtle ways and seems
unlikely to lead to success.
Using a Monitor in nsThread is therefore untenable, and we would like to
retain the current Mutex that lives in nsThread. Therefore, we need to
have nsEventQueue manage its own condition variable and push the
required (Mutex) locking to the client of nsEventQueue. This scheme
also seems more fitting: external clients merely need synchronized
access to the event queue; the details of managing notifications about
events in the event queue should be left up to the event queue itself.
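Concretely, the reshaped nsEventQueue API would look something like this
(a sketch; the exact signatures in the tree may differ):

    class nsEventQueue {
    public:
      // The client supplies the Mutex; the queue builds its own CondVar
      // on top of it.
      explicit nsEventQueue(mozilla::Mutex& aLock);

      // Callers must already hold the client's Mutex; the MutexAutoLock&
      // parameter serves as compile-time proof of that.
      void PutEvent(nsIRunnable* aEvent, mozilla::MutexAutoLock& aProofOfLock);
      bool GetEvent(bool aMayWait, nsIRunnable** aResult,
                    mozilla::MutexAutoLock& aProofOfLock);

    private:
      mozilla::CondVar mEventsAvailable;  // managed by the queue itself
    };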
Pushing the locking out this way also forces us to merge
nsEventQueueBase and nsEventQueue: there's no way to have
nsEventQueueBase require an externally-defined Mutex and then have
nsEventQueue subclass nsEventQueueBase and provide its own Mutex to the
superclass. C++ initialization rules (and the way things like CondVar
are constructed) simply forbid it, as the sketch below shows. But
that's OK, because we want a world where nsEventQueue is externally
locked anyway, so there's no reason to have separate classes here.
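A small example of the problem: base-class subobjects are constructed
before any of the derived class's members, so code like the following
hands the CondVar a Mutex that hasn't been constructed yet:

    class nsEventQueueBase {
    public:
      explicit nsEventQueueBase(mozilla::Mutex& aLock)
        // CondVar's constructor uses the Mutex immediately.
        : mEventsAvailable(aLock, "events available") {}
    private:
      mozilla::CondVar mEventsAvailable;
    };

    class nsEventQueue : public nsEventQueueBase {
    public:
      // The base class is initialized before mLock, so this passes a
      // not-yet-constructed Mutex to the CondVar above.
      nsEventQueue() : nsEventQueueBase(mLock), mLock("queue lock") {}
    private:
      mozilla::Mutex mLock;
    };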
One casualty of this work is the removal of ChaosMode support from
nsEventQueue. nsEventQueue had support for delaying the placement of
events into the queue, theoretically giving other threads the chance to
put events there first. Unfortunately, since the calling thread holds
the queue's lock the whole time (as the required MutexAutoLock&
parameter makes evident), sleeping in PutEvent accomplishes nothing
except delaying that thread from getting useful work done. We would
like to support this sort of delay eventually, but it's not obvious how
to do so reasonably right now.
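The removed code was roughly of this shape (a reconstruction for
illustration, not the verbatim original):

    void nsEventQueue::PutEvent(nsIRunnable* aEvent,
                                mozilla::MutexAutoLock& aProofOfLock) {
      if (mozilla::ChaosMode::isActive(mozilla::ChaosFeature::ThreadScheduling)) {
        // The caller necessarily holds the queue's lock here, so this
        // sleep can't let another thread enqueue first; it only stalls
        // the current thread.
        PR_Sleep(PR_MillisecondsToInterval(
            mozilla::ChaosMode::randomUint32LessThan(16)));
      }
      // ... actually enqueue aEvent ...
    }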
A wrinkle in this otherwise pleasant refactoring is that nsThreadPool's
threads wait for limited amounts of time for new events to be placed in
the event queue, so that they can shut themselves down if no new events
are appearing. Changing the limit on the number of threads also needs
to wake up all waiting threads, so that excess threads can shut
themselves down if necessary.
Unfortunately, with the transition to nsEventQueue managing its own
condition variable, there's no way for nsThreadPool to perform these
functions, since there's no Monitor to wait on. Therefore, we add a
private API for accessing the condition variable and performing the
tasks nsThreadPool needs.
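That private API is roughly the following (a sketch; the real interface
may differ):

    class nsEventQueue {
      friend class nsThreadPool;

    private:
      // Wait on the queue's CondVar for at most aInterval; the client's
      // Mutex must be held, hence the proof-of-lock parameter.
      void Wait(PRIntervalTime aInterval, mozilla::MutexAutoLock& aProofOfLock);

      // Wake every thread blocked in Wait(), e.g. after the thread
      // limit is lowered, so idle threads can re-check whether they
      // should shut down.
      void NotifyAll(mozilla::MutexAutoLock& aProofOfLock);
    };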
Prior to this patch series, placing items in an nsThread's event queue
required three lock/unlock pairs: one for nsThread's Mutex, one to
enter nsEventQueue's ReentrantMonitor, and one to exit it. The upshot
of all this work is that we now require only a single lock/unlock pair,
in nsThread itself, as things should be.
Like the previous patch, this patch is a no-op change in terms of
functionality. It does, however, pave part of the way for forcing
clients of nsEventQueue to provide their own locking.
This patch is a no-op in terms of functionality. It ensures that we're
always holding nsThread's mutex when we touch mEvents, as dictated by
the comments. Putting this addition into its own patch will help make
the change to having nsEventQueue be guarded by a Mutex, rather than a
Monitor, somewhat clearer.
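In sketch form, the invariant this patch enforces (member names as in
nsThread; the call site is illustrative):

    nsCOMPtr<nsIRunnable> event;
    {
      MutexAutoLock lock(mLock);
      // Per the comments in nsThread.h, mEvents may only be touched
      // while mLock is held.
      mEvents.GetEvent(false, getter_AddRefs(event));
    }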
This is another case of an access to mEvents not being protected by
mLock. Future patches will make this locking requirement explicit in
nsChainedEventQueue, so we won't have problems like this. (Since
nsEventQueue has its own locking at this point, this omission didn't
matter much, but the omission will most certainly matter later.)
GetEvent was only called from one place, so it wasn't terribly useful
as an abstraction. It also broke the invariant that we protect accesses
to mEvents with mLock, as documented in nsThread.h. While upcoming
patches could have just updated GetEvent to do the necessary locking on
its own, it seemed just as easy to make the locking requirements
explicit at the callsite, as will be done for other accesses to
mEvents.
nsEventQueue's monitor does not require re-entrancy now that the monitor
is not externally visible. Since ReentrantMonitors require two separate
mutex lock/unlock pairs (one on entry, and one on exit), this cuts the
amount of locking nsEventQueue's methods do by half.
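A simplified sketch (not the real implementation) of why reentrancy
costs the extra pair: both Enter() and Exit() must take an internal
lock to maintain the owner and entry count:

    void ReentrantMonitor::Enter() {
      MutexAutoLock lock(mInternalMutex);    // lock/unlock pair #1
      while (mOwner && mOwner != PR_GetCurrentThread()) {
        mAvailable.Wait();                   // wait for the owner to exit
      }
      mOwner = PR_GetCurrentThread();
      ++mEntryCount;
    }

    void ReentrantMonitor::Exit() {
      MutexAutoLock lock(mInternalMutex);    // lock/unlock pair #2
      if (--mEntryCount == 0) {
        mOwner = nullptr;
        mAvailable.Notify();
      }
    }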
Jemalloc 4 purges dirty pages regularly during free() when the number
of dirty pages exceeds the number of active pages divided by
1 << lg_dirty_mult; with the default lg_dirty_mult of 3, for example,
purging kicks in once dirty pages exceed one eighth of active pages.
We set lg_dirty_mult in jemalloc_config to limit RSS usage, but it also
has an impact on performance.
So instead of enforcing a high ratio to force more pages to be purged,
we keep jemalloc's default ratio of 8, and force a regular purge of all
dirty pages after cycle collection.
Keeping jemalloc's default ratio ensures that the
cycle-collection-triggered purge doesn't have to walk through literally
all the dirty pages when there are a lot of them: in that case, the
normal jemalloc purge during free() will already have kicked in. It
also ensures that everything that doesn't run the cycle collector, such
as plugins in the plugin-container, still gets some level of purging.
At the same time, since jemalloc_purge_freed_pages does nothing with jemalloc 4,
repurpose the MEMORY_FREE_PURGED_PAGES_MS telemetry probe to track the time
spent in this cycle-collector-triggered purge.
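In sketch form, the post-cycle-collection purge looks something like
this (the function name here is made up; jemalloc_free_dirty_pages()
and the telemetry calls are the existing entry points):

    #include "mozmemory.h"
    #include "mozilla/Telemetry.h"
    #include "mozilla/TimeStamp.h"

    static void PurgeAfterCycleCollection() {
      mozilla::TimeStamp start = mozilla::TimeStamp::Now();
      jemalloc_free_dirty_pages();  // force a purge of all dirty pages
      // MEMORY_FREE_PURGED_PAGES_MS now measures this purge instead of
      // the jemalloc_purge_freed_pages() no-op.
      mozilla::Telemetry::AccumulateTimeDelta(
          mozilla::Telemetry::MEMORY_FREE_PURGED_PAGES_MS, start);
    }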
The breakpad dependency in ThreadStackHelper is preventing us from
upgrading our in-tree copy to a newer version (bug 1069556). This patch
gets rid of that dependency. This makes native stack frames not work
for BHR, but because of the ftp.m.o decommissioning, native
symbolication was already broken and native stack frames already don't
work, so we don't really lose anything from this patch.
Eventually we want to make ThreadStackHelper use other means of
unwinding, such as LUL for Linux.
I added | #if 0 | around the code that fills the thread context, but
left the code in because I think we'll eventually want to reuse some of
it.
This makes the order of |aDelay| and |aType| match those of the InitWith*()
functions.
I've made this change because the inconsistency tripped me up during the
development of part 4.
There are many sub-classes of nsExpirationTracker. In order to
distinguish them nicely in the logging of timer firings, it's necessary
to manually name each one. (This wouldn't be necessary if there were a
way to stringify template parameters, but there isn't.)
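With the name parameter, a subclass looks roughly like this (the
subclass, its payload type, and the timeout are made up for
illustration):

    class ImageExpirationTracker final
        : public nsExpirationTracker<CachedImage, 3> {
    public:
      ImageExpirationTracker()
          // The string names this tracker in timer-firing logs, since
          // the template parameters can't be stringified automatically.
          : nsExpirationTracker<CachedImage, 3>(2000,
                                                "ImageExpirationTracker") {}

      void NotifyExpired(CachedImage* aImage) override {
        // Drop the expired image from the cache.
      }
    };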
This is similar to the solution adopted for bug 1190985, a race in
netwerk's DebugMutexAutoLock. A relaxed atomic tells tools like TSan
that we're OK with this variable being touched from multiple threads.
That it's only set from within a locked mutex should ensure whatever
memory barriers we need are executed so all threads have a consistent
view of what value it contains.
Getting rid of another |volatile| usage in the codebase is just a bonus.
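In sketch form (the member names here are illustrative):

    #include "mozilla/Atomics.h"

    // Written only while mMutex is held; read from any thread. Relaxed
    // ordering suffices because the mutex's own barriers publish the
    // new value to other threads.
    mozilla::Atomic<bool, mozilla::Relaxed> mIsShutdown{false};

    void Shutdown() {
      mozilla::MutexAutoLock lock(mMutex);
      mIsShutdown = true;  // no data-race report from TSan, unlike |volatile|
    }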
nsEventQueue's HasPendingEvent method is defined simply as:
return GetEvent(false, nullptr);
So we can substitute HasPendingEvent for this particular GetEvent call
to make the code clearer.
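That is, the call site changes like so:

    // Before:
    bool hasPending = GetEvent(false, nullptr);
    // After (equivalent, but clearer about intent):
    bool hasPending = HasPendingEvent();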