This patch refactors the nsThread event queue to clean it up and to make it easier to restructure. The fundamental concepts are as follows:
Each nsThread will have a pointer to a refcounted SynchronizedEventQueue. A SynchronizedEQ handles the locking and condition variable work involved in posting and popping events. For the actual storage of events, it delegates to an AbstractEventQueue data structure, which it owns through a UniquePtr.
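A minimal sketch of that structure, using standard-library stand-ins (std::function for nsIRunnable, std::unique_ptr for UniquePtr, refcounting omitted); the method signatures here are illustrative, not the patch's exact API:

  #include <functional>
  #include <memory>

  using Runnable = std::function<void()>;  // stand-in for nsIRunnable

  // Pure storage interface; knows nothing about locking.
  class AbstractEventQueue {
   public:
    virtual ~AbstractEventQueue() = default;
    virtual void PutEvent(Runnable aEvent) = 0;
    virtual bool GetEvent(Runnable& aEvent) = 0;  // false if empty
    virtual bool HasPendingEvent() = 0;
  };

  // Synchronization layer; owns its storage through a unique_ptr and
  // delegates to it while holding whatever lock the subclass provides.
  class SynchronizedEventQueue {
   public:
    explicit SynchronizedEventQueue(
        std::unique_ptr<AbstractEventQueue> aQueue)
        : mQueue(std::move(aQueue)) {}
    virtual ~SynchronizedEventQueue() = default;
    virtual void PutEvent(Runnable aEvent) = 0;
    virtual Runnable GetEvent(bool aMayWait) = 0;

   protected:
    std::unique_ptr<AbstractEventQueue> mQueue;  // delegated storage
  };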
Both SynchronizedEQ and AbstractEventQueue are abstract classes. There is only one concrete implementation of SynchronizedEQ in this patch, called ThreadEventQueue. ThreadEventQueue uses locks and condition variables to post and pop events the same way nsThread does today. It also encapsulates the functionality that DOM workers need to implement their special event loops (PushEventQueue and PopEventQueue). In later Quantum DOM work, I plan to add another SynchronizedEQ implementation for the main thread, called SchedulerEventQueue, which will have special code for the cooperatively scheduled threads in Quantum DOM.
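Continuing the sketch above, a ThreadEventQueue-like class might pair a mutex and condition variable with the delegated storage, and keep a stack of inner queues for the workers' nested event loops. This is illustrative only, not the actual implementation:

  #include <condition_variable>
  #include <mutex>
  #include <vector>
  // (Runnable, AbstractEventQueue, SynchronizedEventQueue as above.)

  class ThreadEventQueue final : public SynchronizedEventQueue {
   public:
    using SynchronizedEventQueue::SynchronizedEventQueue;

    void PutEvent(Runnable aEvent) override {
      std::lock_guard<std::mutex> guard(mLock);
      CurrentQueue()->PutEvent(std::move(aEvent));
      mEventsAvailable.notify_one();  // wake one waiting GetEvent caller
    }

    Runnable GetEvent(bool aMayWait) override {
      std::unique_lock<std::mutex> lock(mLock);
      Runnable event;
      while (!CurrentQueue()->GetEvent(event)) {
        if (!aMayWait) {
          return nullptr;
        }
        mEventsAvailable.wait(lock);  // releases mLock while sleeping
      }
      return event;
    }

    // DOM workers run nested event loops by pushing an inner queue;
    // dispatches that arrive while it is pushed land in that queue.
    void PushEventQueue(std::unique_ptr<AbstractEventQueue> aInner) {
      std::lock_guard<std::mutex> guard(mLock);
      mNested.push_back(std::move(aInner));
    }
    void PopEventQueue() {
      std::lock_guard<std::mutex> guard(mLock);
      mNested.pop_back();
    }

   private:
    AbstractEventQueue* CurrentQueue() {
      return mNested.empty() ? mQueue.get() : mNested.back().get();
    }

    std::mutex mLock;
    std::condition_variable mEventsAvailable;
    std::vector<std::unique_ptr<AbstractEventQueue>> mNested;
  };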
There are two concrete implementations of AbstractEventQueue in this patch: EventQueue and PrioritizedEventQueue. EventQueue replaces the old nsEventQueue. PrioritizedEventQueue uses multiple internal queues, one for each event priority.
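A sketch of the prioritized variant: one sub-queue per priority level, popped from the highest priority downwards. The priority levels and the rule for assigning events to them are invented here for illustration:

  #include <array>
  #include <queue>
  // (Runnable and AbstractEventQueue as sketched above.)

  class PrioritizedEventQueue final : public AbstractEventQueue {
   public:
    enum Priority { High = 0, Input, Normal, Idle, Count };

    void PutEvent(Runnable aEvent) override {
      // A real implementation would derive a priority from the event;
      // for simplicity everything lands in the Normal queue here.
      mQueues[Normal].push(std::move(aEvent));
    }

    bool GetEvent(Runnable& aEvent) override {
      for (auto& queue : mQueues) {  // scan from highest priority down
        if (!queue.empty()) {
          aEvent = std::move(queue.front());
          queue.pop();
          return true;
        }
      }
      return false;
    }

    bool HasPendingEvent() override {
      for (const auto& queue : mQueues) {
        if (!queue.empty()) {
          return true;
        }
      }
      return false;
    }

   private:
    std::array<std::queue<Runnable>, Count> mQueues;
  };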
The final major piece here is ThreadEventTarget, which splits some of the code for posting events out of nsThread. Eventually, my plan is for multiple cooperatively scheduled nsThreads to be able to share a single ThreadEventTarget. In this patch, though, each nsThread has its own ThreadEventTarget. For now, the class's purpose is simply to collect related code in one place.
One final note: I tried to avoid virtual dispatch overhead as much as possible. Calls to SynchronizedEQ methods do use virtual dispatch, since I plan to use different implementations for different threads with Quantum DOM. But all calls to EventQueue methods should be non-virtual: although the methods are declared virtual, every class involved is final, and the concrete types are known at the call sites through templatization, so the compiler can devirtualize the calls.
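A self-contained illustration of that devirtualization pattern (generic names, not the patch's classes): because the concrete type is final and flows through a template parameter, the compiler can bind the nominally virtual call statically:

  #include <cstdio>
  #include <memory>

  struct AbstractQueue {
    virtual ~AbstractQueue() = default;
    virtual void Put(int aValue) = 0;
  };

  struct ConcreteQueue final : AbstractQueue {
    void Put(int aValue) override { std::printf("put %d\n", aValue); }
  };

  // InnerQueueT is the concrete, final type, so mQueue->Put() needs no
  // vtable lookup: no subclass of ConcreteQueue can exist to override it.
  template <typename InnerQueueT>
  class Synchronized final {
   public:
    void Put(int aValue) { mQueue->Put(aValue); }
   private:
    std::unique_ptr<InnerQueueT> mQueue = std::make_unique<InnerQueueT>();
  };

  int main() {
    Synchronized<ConcreteQueue> queue;  // concrete type known via template
    queue.Put(42);
  }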
MozReview-Commit-ID: 9Evtr9oIJvx
We want nsThread's use of nsEventQueue to rely on locking done in
nsThread instead of in nsEventQueue, for efficiency's sake: we would
then need to lock only once, in nsThread, rather than locking in
nsThread and again in nsEventQueue as we do today. With the current
structure of nsEventQueue, that would mean nsThread should use a
Monitor internally, rather than a Mutex.
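For illustration, a Monitor is essentially a Mutex and a CondVar bundled together; the single-lock arrangement being described would look roughly like this generic sketch, with std:: primitives standing in for Gecko's:

  #include <condition_variable>
  #include <mutex>
  #include <queue>

  std::mutex gLock;                  // the one lock, conceptually nsThread's
  std::condition_variable gCondVar;  // must pair with gLock to wait safely
  std::queue<int> gEvents;

  void PutEvent(int aEvent) {
    {
      std::lock_guard<std::mutex> guard(gLock);
      gEvents.push(aEvent);
    }  // one lock/unlock pair, total
    gCondVar.notify_one();
  }

  int GetEvent() {
    std::unique_lock<std::mutex> lock(gLock);
    // wait() atomically releases gLock and sleeps, so PutEvent on another
    // thread can take the lock, enqueue, and signal with no lost wakeup.
    gCondVar.wait(lock, [] { return !gEvents.empty(); });
    int event = gEvents.front();
    gEvents.pop();
    return event;
  }

  int main() {
    PutEvent(7);
    return GetEvent() == 7 ? 0 : 1;
  }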
Using a Monitor would be well and good, except that DOM workers use
nsThread's mutex to protect their own, internal CondVar. Switching
nsThread to use a Monitor would mean that either:
- DOM workers drop their internal CondVar in favor of nsThread's
Monitor-owned CondVar. This change seems unlikely to work out well,
because the Monitor-owned CondVar would then perform double duty:
tracking availability of events in nsThread's event queue and
additionally tracking whatever DOM workers were using their CondVar
for. Having a single CondVar track two things in such a fashion is for
Experts Only.
- DOM workers grow their own Mutex to protect their own CondVar. Adding
a mutex like this would change locking in subtle ways and seems
unlikely to lead to success.
Using a Monitor in nsThread is therefore untenable, and we would like to
retain the current Mutex that lives in nsThread. That means we need to
have nsEventQueue manage its own condition variable and push the
required (Mutex) locking to the client of nsEventQueue. This scheme
also seems more fitting: external clients merely need synchronized
access to the event queue; the details of managing notifications about
events in the event queue should be left up to the event queue itself.
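Concretely, the division of labor might look like the following sketch: the client owns the Mutex, while the queue owns its CondVar and demands proof that the client's lock is held. Gecko's actual methods take a MutexAutoLock&; a std::unique_lock stands in for it here:

  #include <condition_variable>
  #include <functional>
  #include <mutex>
  #include <queue>

  using Runnable = std::function<void()>;

  class ExternallyLockedEventQueue {
   public:
    // Requiring the caller's held lock by reference makes it hard to
    // call these methods without actually holding the client's mutex.
    using ProofOfLock = std::unique_lock<std::mutex>;

    void PutEvent(Runnable aEvent, ProofOfLock& aProof) {
      mQueue.push(std::move(aEvent));
      mEventsAvailable.notify_one();  // notifications managed by the queue
    }

    Runnable GetEvent(bool aMayWait, ProofOfLock& aProof) {
      while (mQueue.empty()) {
        if (!aMayWait) {
          return nullptr;
        }
        mEventsAvailable.wait(aProof);  // waits on the *caller's* mutex
      }
      Runnable event = std::move(mQueue.front());
      mQueue.pop();
      return event;
    }

   private:
    std::queue<Runnable> mQueue;
    std::condition_variable mEventsAvailable;  // owned by the queue itself
  };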
Doing so also forces us to merge nsEventQueueBase and nsEventQueue:
there's no way to have nsEventQueueBase require an externally-defined
Mutex and then have nsEventQueue subclass nsEventQueueBase and provide
its own Mutex to the superclass. C++ initialization rules (and the way
things like CondVar are constructed) simply forbid it. But that's OK,
because we want a world where nsEventQueue is externally locked anyway,
so there's no reason to have separate classes here.
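A generic illustration of the problem (not Gecko's actual classes): a base class whose CondVar-like member must be bound to a mutex at construction time cannot receive a mutex that lives in the derived class:

  #include <condition_variable>
  #include <mutex>

  // Stand-in for CondVar: it must be handed its mutex when constructed.
  struct BoundCondVar {
    explicit BoundCondVar(std::mutex& aMutex) : mMutex(aMutex) {}
    std::mutex& mMutex;
    std::condition_variable mCondVar;
  };

  struct EventQueueBase {
    explicit EventQueueBase(std::mutex& aMutex) : mCondVar(aMutex) {}
    BoundCondVar mCondVar;
  };

  struct EventQueueWithOwnMutex : EventQueueBase {
    // Broken: base subobjects are constructed before derived members,
    // so mMutex does not exist yet when EventQueueBase (and its
    // BoundCondVar) is initialized. If the CondVar's constructor
    // actually used the mutex, as Gecko's does, this would be
    // undefined behavior.
    EventQueueWithOwnMutex() : EventQueueBase(mMutex) {}
    std::mutex mMutex;
  };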
One casualty of this work is removing ChaosMode support from
nsEventQueue. nsEventQueue had support for delaying the placement of
events into the queue, theoretically giving other threads the chance to
put events there first. Unfortunately, since the delaying thread would
have been holding a lock (as is evident from the MutexAutoLock&
parameter required), sleeping in PutEvent accomplishes nothing but
delaying that thread from getting useful work done. We should support
this eventually, but it's not clear how to do so reasonably right now.
A wrinkle in this otherwise pleasant refactoring is that nsThreadPool's
threads wait for limited amounts of time for new events to be placed in
the event queue, so that they can shut themselves down if no new events
are appearing. Setting limits on the number of threads also needs to
wake up all the threads, so that any threads over the limit can shut
themselves down.
Unfortunately, with the transition to nsEventQueue managing its own
condition variable, there's no way for nsThreadPool to perform these
functions, since there's no Monitor to wait on. Therefore, we add a
private API for accessing the condition variable and performing the
tasks nsThreadPool needs.
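The shape such a private API might take (the names WaitForEvent and NotifyAll are hypothetical, continuing the externally-locked sketch from earlier): the pool needs a timed wait, so idle threads can expire, and a broadcast, so limit changes can wake every thread:

  #include <chrono>
  #include <condition_variable>
  #include <mutex>

  class PoolVisibleEventQueue {
   public:
    using ProofOfLock = std::unique_lock<std::mutex>;

    // Returns false on timeout, letting an idle pool thread decide to
    // shut itself down.
    bool WaitForEvent(ProofOfLock& aProof,
                      std::chrono::milliseconds aTimeout) {
      return mEventsAvailable.wait_for(aProof, aTimeout) !=
             std::cv_status::timeout;
    }

    // Wakes every waiting thread, e.g. after the thread limit is lowered.
    void NotifyAll() { mEventsAvailable.notify_all(); }

   private:
    std::condition_variable mEventsAvailable;
  };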
Prior to the previous patches in this series, placing items in an
nsThread's event queue required three lock/unlock pairs: one for
nsThread's Mutex, one to enter nsEventQueue's ReentrantMonitor, and one
to exit it. The upshot of all this work is that we now require only one
lock/unlock pair, in nsThread itself, as things should be.
Like the previous patch, this patch is a no-op change in terms of
functionality. It does, however, pave part of the way for forcing
clients of nsEventQueue to provide their own locking.
There's no reason nsThreadPool needs to use a reentrant monitor for
locking its event queue. Having it use a non-reentrant one should be
slightly more efficient, both in the general operation of the monitor
and in avoiding the redundant locking we currently perform in methods
like nsThreadPool::Run. This change also eliminates the only usage of
nsEventQueue::GetReentrantMonitor.
The way idle nsThreadPool threads wait with a timeout doesn't work well with
shutdown from nsThreadManager.
nsThreadPool doesn't need to use nsIThread for its threads. The nsIThread
interface is not useful for threads running in an nsThreadPool.
The nsIEventTarget on the nsIThreadPool should be used for dispatching events,
not an interface on the individual threads, and the threads don't need an
nsEventQueue because they use the nsEventQueue on the nsThreadPool.
Shutting down single event threads is easier than running nested event
loops for nsIThreads, avoiding the multilevel nested event loop
situations that arise when several threads finish and are shut down.
While the ThreadFunc is running, a nested event loop is still used in
Shutdown(), in case some consumers might need it and because that is
the documented API.
This also simplifies thread creation, avoiding races that could
temporarily create an extra thread or more.
--HG--
extra : transplant_source : %F7%14%16%12%EF%E9%84%19D%26%3C%FE%1F%EC%FF%A3%BAG%C4%F3