The bulk of this commit was generated with a script, executed at the top
level of a typical source code checkout. The only non-machine-generated
part was modifying MFBT's moz.build to reflect the new naming.
CLOSED TREE makes big refactorings like this a piece of cake.
# The main substitution.
find . -name '*.cpp' -o -name '*.cc' -o -name '*.h' -o -name '*.mm' -o -name '*.idl' | \
xargs perl -p -i -e '
s/nsRefPtr\.h/RefPtr\.h/g; # handle includes
s/nsRefPtr ?</RefPtr</g; # handle declarations and variables
'
# Handle a special friend declaration in gfx/layers/AtomicRefCountedWithFinalize.h.
perl -p -i -e 's/::nsRefPtr;/::RefPtr;/' gfx/layers/AtomicRefCountedWithFinalize.h
# Handle nsRefPtr.h itself, a couple places that define constructors
# from nsRefPtr, and code generators specially. We do this here, rather
# than indiscriminately s/nsRefPtr/RefPtr/, because that would rename
# things like nsRefPtrHashtable.
perl -p -i -e 's/nsRefPtr/RefPtr/g' \
mfbt/nsRefPtr.h \
xpcom/glue/nsCOMPtr.h \
xpcom/base/OwningNonNull.h \
ipc/ipdl/ipdl/lower.py \
ipc/ipdl/ipdl/builtin.py \
dom/bindings/Codegen.py \
python/lldbutils/lldbutils/utils.py
# In our indiscriminate substitution above, we renamed
# nsRefPtrGetterAddRefs, the class behind getter_AddRefs. Fix that up.
find . -name '*.cpp' -o -name '*.h' -o -name '*.idl' | \
xargs perl -p -i -e 's/nsRefPtrGetterAddRefs/RefPtrGetterAddRefs/g'
if [ -d .git ]; then
git mv mfbt/nsRefPtr.h mfbt/RefPtr.h
else
hg mv mfbt/nsRefPtr.h mfbt/RefPtr.h
fi
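For illustration, the net effect on a typical consumer looks roughly like this (Foo is a hypothetical refcounted class; the "before" lines are shown as comments):
// Before the script: a hypothetical consumer of the old header.
//   #include "nsRefPtr.h"
//   nsRefPtr<Foo> mFoo;
//   nsRefPtrHashtable<nsStringHashKey, Foo> mTable;
// After the script: only the smart pointer and its header are renamed.
#include "RefPtr.h"
RefPtr<Foo> mFoo;                               // handled by s/nsRefPtr ?</RefPtr</
nsRefPtrHashtable<nsStringHashKey, Foo> mTable; // deliberately left untouched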
--HG--
rename : mfbt/nsRefPtr.h => mfbt/RefPtr.h
When workers shut down, we discard the event queue rather than running it to completion. Originally, workers managed their event queue themselves and would simply iterate through the array of events and cancel them all. After bug 914762 this was done by setting a (thread-)global "canceling" flag and then calling NS_ProcessPendingEvents. But this neglects that a shutdown request can be received while the worker is in a sync queue. In this case, calling NS_ProcessPendingEvents will process any events pending in the sync queue, which is *not* the queue we need to cancel.
The fix is, if we are in a sync queue when NotifyInternal is called, to defer clearing the queue until the top-most sync queue is destroyed and we are about to return to the regular event queue. Only then can we call NS_ProcessPendingEvents to clear out the queue. Because we can never process any events from this queue while sync queues are active, the timing of the mass cancellation is unchanged from the perspective of events in the regular queue.
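A rough sketch of the shape of the fix (class, method, and member names here are illustrative, not the actual WorkerPrivate API):
// Illustrative only; the real logic lives in WorkerPrivate / NotifyInternal.
class WorkerEventLoop {
  bool mInSyncQueue = false;               // a sync event queue is active
  bool mPendingEventQueueClearing = false; // shutdown arrived during a sync queue

  void NotifyInternal(/* shutdown status */) {
    if (mInSyncQueue) {
      // Clearing now would make NS_ProcessPendingEvents drain the *sync*
      // queue, not the queue we need to cancel. Defer it instead.
      mPendingEventQueueClearing = true;
    } else {
      ClearMainEventQueue();
    }
  }

  void OnTopmostSyncQueueDestroyed() {
    // About to return to the regular event queue; now it is safe to cancel it.
    if (mPendingEventQueueClearing) {
      mPendingEventQueueClearing = false;
      ClearMainEventQueue();
    }
  }

  // Sets the thread-global "canceling" flag and calls NS_ProcessPendingEvents.
  void ClearMainEventQueue();
};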
--HG--
extra : rebase_source : f67fbee27c0751068a4e7aaf692cbfc1d3c9aa7c
This commit implements the following changes to get registration.https.html working.
1) Fail with NS_ERROR_DOM_SECURITY_ERR where the spec requires it.
2) Propagate JSExnType to ServiceWorkerManager::HandleError() so that a JS
exception object with the correct .name can be created.
3) Fail with a security error on redirect failure.
4) Check the fetched script's MIME type (see the sketch after this list).
5) Add missing python server files for web-platform-tests.
6) Update web-platform-tests expected data.
7) Several tests have been changed to use TypeError or more appropriate JS
errors based on my reading of the spec.
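A minimal sketch of the MIME-type check in item 4 (the accepted type list and the exact failure path are assumptions for illustration, not the final implementation):
#include "nsError.h"
#include "nsString.h"

// Reject the registration job if the fetched script was not served with a
// JavaScript MIME type; per items 1 and 4 we fail with a security error.
static bool IsJavaScriptMIMEType(const nsACString& aMimeType) {
  return aMimeType.EqualsLiteral("text/javascript") ||
         aMimeType.EqualsLiteral("application/javascript") ||
         aMimeType.EqualsLiteral("application/x-javascript");
}

static nsresult CheckScriptMIMEType(const nsACString& aMimeType) {
  return IsJavaScriptMIMEType(aMimeType) ? NS_OK : NS_ERROR_DOM_SECURITY_ERR;
}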
--HG--
extra : commitid : IxWo2IVUweU
extra : rebase_source : c3c1ead153027bf84e7f239fd7125224fe25c3c0
This is motivated by three separate but related problems:
1. Our concept of recursion depth is broken for things that run from AfterProcessNextEvent observers (e.g. Promises). We decrement the recursionDepth counter before firing observers, so a Promise callback running at the lowest event loop depth has a recursion depth of 0 (whereas a regular nsIRunnable would be 1). This is a problem because it's impossible to distinguish a Promise running after a sync XHR's onreadystatechange handler from a top-level event (since the former runs with depth 2 - 1 = 1, and the latter runs with just 1; see the sketch after this list).
2. The nsIThreadObserver mechanism that is used by a lot of code to run "after" the current event is a poor fit for anything that runs script. First, the order the observers fire in is the order they were added, not anything fixed by spec. Additionally, running script can cause the event loop to spin, which is a big source of pain here (bholley has some nasty bug caused by this).
3. We run Promises from different points in the code for workers and main thread. The latter runs from XPConnect's nsIThreadObserver callbacks, while the former runs from a hardcoded call to run Promises in the worker event loop. What workers do is particularly problematic because it means we can't get the right recursion depth no matter what we do to nsThread.
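For concreteness, a stripped-down model of problem 1 (the names are invented; the real counter lives in nsThread):
#include <cstdint>

void FireAfterProcessNextEventObservers();  // where Promise callbacks run today
static uint32_t sRecursionDepth = 0;

void ProcessNextEvent(void (*aEvent)()) {
  ++sRecursionDepth;                     // a top-level runnable runs at depth 1;
  aEvent();                              // a sync XHR here spins a nested loop (depth 2)
  --sRecursionDepth;                     // the counter is decremented first...
  FireAfterProcessNextEventObservers();  // ...so a Promise queued during the sync XHR
                                         // observes 2 - 1 = 1, indistinguishable from
                                         // a plain top-level event
}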
To solve this, this patch does the following:
1. Consolidate some handling of microtasks and all handling of stable state from appshell and WorkerPrivate into CycleCollectedJSRuntime.
2. Make the recursionDepth counter only available to CycleCollectedJSRuntime (and its consumers) and remove it from the nsIThreadInternal and nsIThreadObserver APIs.
3. Adjust the recursionDepth counter so that microtasks run with the recursionDepth of the task they are associated with.
4. Introduce the concept of metastable state to replace appshell's RunBeforeNextEvent. Metastable state is reached after every microtask or task is completed. This provides the semantics that bent and I want for IndexedDB, where transactions autocommit at the end of a microtask and do not "spill" from one microtask into a subsequent microtask. This differs from appshell's RunBeforeNextEvent in two ways:
a) It fires between microtasks, which was the motivation for starting this.
b) It no longer ensures that we're at the same event loop depth in the native event queue. bent decided we don't care about this.
5. Reorder stable state to happen after microtasks such as Promises, per HTML. Right now we call the regular thread observers, including appshell, before the main thread observer (XPConnect), so stable state tasks happen before microtasks (see the ordering sketch below).
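A sketch of the per-task ordering this aims for (the hook names are invented; the exact interleaving is simplified, and the real logic sits in CycleCollectedJSRuntime):
#include <cstdint>

bool HasPendingMicrotasks();
void RunOneMicrotask(uint32_t aRecursionDepth);  // runs a Promise job at this depth
void NotifyMetastableState();                    // e.g. lets IndexedDB autocommit
void NotifyStableState();                        // HTML's "stable state" work

void AfterTaskCompleted(uint32_t aTaskRecursionDepth) {
  // Changes 3 and 4: microtasks run at the recursion depth of the task they
  // belong to, and metastable state is reached after the task and after each
  // microtask, so an IndexedDB transaction cannot "spill" into the next one.
  NotifyMetastableState();
  while (HasPendingMicrotasks()) {
    RunOneMicrotask(aTaskRecursionDepth);
    NotifyMetastableState();
  }
  // Change 5: stable state fires only once the microtask queue is empty,
  // matching the ordering HTML specifies (microtasks before stable state).
  NotifyStableState();
}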
client.focus() now directly uses the DOMServiceWorkerFocusClient event. The
platform popup checking is not available in service workers; instead, each
worker maintains a counter indicating whether it is allowed to interact with
windows. This counter is currently only incremented by the notificationclick
event and dropped again after the event has been dispatched.
Since acquiring a client is an async operation that most service workers will
perform in notificationclick, an additional extension is granted after the
event, during which the service worker may focus a client. This extension is
only granted if the script invokes NotificationEvent.waitUntil(), at which
point the timer begins. The extension is terminated when the Promise passed to
waitUntil() is fulfilled or the timer expires, whichever comes first.
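A simplified sketch of that bookkeeping (the type name, the timer plumbing, and the dispatch pseudo-flow are invented for illustration):
// Invented sketch of a per-worker window-interaction allowance.
#include <cstdint>

class WindowInteractionGuard {
  uint32_t mAllowCount = 0;        // > 0 means client.focus() is allowed

 public:
  void Push() { ++mAllowCount; }   // taken before dispatching notificationclick
  void Pop() { --mAllowCount; }    // dropped after dispatch, or when the
                                   // waitUntil() extension ends
  bool IsAllowed() const { return mAllowCount > 0; }
};

// Rough dispatch flow for notificationclick:
//   guard.Push();
//   DispatchNotificationClick(event);
//   if (event.WaitUntilWasCalled()) {
//     // The extension ends when the promise settles or the grace timer fires,
//     // whichever comes first; the other path is canceled so Pop() runs once.
//     StartGraceTimer();
//     waitUntilPromise->Then([&guard] { guard.Pop(); });
//   } else {
//     guard.Pop();   // no extension: the allowance ends right after dispatch
//   }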
--HG--
extra : commitid : 5wavKTRZWcy
extra : rebase_source : fc8ab4ef6c9bf384b5525b0bc979b3cedc4e1d6c