The only reason NotifyRunnable::Dispatch needs a JSContext is so that it can call
ModifyBusyCount in Pre/PostDispatch. The only reason that needs a JSContext is
to call Cancel(), which only needs it to call Notify(), which only needs it to
call NotifyPrivate, which only needs it to dispatch a NotifyRunnable.
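Put differently, the JSContext threads through a cycle that never actually uses it, so the parameter can be dropped from the entire chain. A sketch of the resulting signatures (abbreviated and illustrative, not the exact declarations):

// None of these need a JSContext once the cycle is broken.
bool WorkerPrivate::ModifyBusyCount(bool aIncrease);
bool WorkerPrivate::Cancel() { return Notify(Canceling); }
bool WorkerPrivate::Notify(Status aStatus) { return NotifyPrivate(aStatus); }
bool WorkerPrivate::NotifyPrivate(Status aStatus); // dispatches a NotifyRunnable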
RunExpiredTimeouts has "fudging" code to ensure that we always execute at least one timeout. This is intended to cover cases where an nsITimer fires slightly early, but it means we must be careful not to fire a timer more times than we intend, or we'll execute a timeout prematurely.
Consider a sequence of setTimeout calls alternating in delay between 0 ms and 1000 ms. When the 1000 ms timeout fires, it schedules a 0 ms timeout. The setTimeout call itself calls RescheduleTimeoutTimer, which arms the timer with a 0 ms delay. And once we unwind the 1000 ms timeout, RunExpiredTimeouts will also arm the timer with a 0 ms delay. If the timer has fired in the meantime (remember, it is processed on a completely different thread), we will ultimately get two callbacks from nsITimer for our 0 ms timeout. The first will run the 0 ms timeout and schedule a 1000 ms timeout, and the second will run the 1000 ms timeout (remember, RunExpiredTimeouts always runs at least one timeout!) ~999 ms ahead of schedule.
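For reference, a minimal sketch of the fudging described above (member names are hypothetical, not the actual code):

void WorkerPrivate::RunExpiredTimeouts()
{
  TimeStamp now = TimeStamp::Now();
  for (uint32_t i = 0; i < mTimeouts.Length(); ++i) {
    // The i == 0 exception is the fudge: always run at least one
    // timeout, even if `now` is slightly before its target time.
    if (i != 0 && mTimeouts[i]->mTargetTime > now) {
      break;
    }
    RunTimeoutHandler(mTimeouts[i]);
  }
}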
The solution is to cancel the timer in RescheduleTimeoutTimer, so that the second call cancels any event still pending from the first scheduling. But as the code stands that doesn't work at all, because of how we use nsITimer. Before worker threads were capable of accepting arbitrary runnables we created TimerThreadEventTarget, which translates the timer firing into the special worker event queue when the timer thread attempts to *dispatch* a runnable to the worker: the timer's own event is discarded and TimerThreadEventTarget creates its own, so calling nsITimer::Cancel doesn't actually cancel anything. We still need TimerThreadEventTarget for some of the other types of timers (which use control runnables that interrupt JS, not the regular event queue), but setTimeout can simply run like a normal nsITimer callback now, and that is what this fix requires for Cancel to work.
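A sketch of the fix, assuming a one-shot nsITimer member named mTimer and an nsITimerCallback member named mTimerCallback (names are illustrative):

nsresult WorkerPrivate::RescheduleTimeoutTimer(uint32_t aDelayMs)
{
  // Cancel first: now that setTimeout runs as a normal nsITimer
  // callback, this also drops any event from an earlier scheduling
  // that is still in flight, so arming the timer twice for 0 ms can
  // no longer deliver two callbacks.
  mTimer->Cancel();
  return mTimer->InitWithCallback(mTimerCallback, aDelayMs,
                                  nsITimer::TYPE_ONE_SHOT);
}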
The bulk of this commit was generated with a script, executed at the top
level of a typical source code checkout. The only non-machine-generated
part was modifying MFBT's moz.build to reflect the new naming.
CLOSED TREE makes big refactorings like this a piece of cake.
# The main substitution.
find . -name '*.cpp' -o -name '*.cc' -o -name '*.h' -o -name '*.mm' -o -name '*.idl' | \
xargs perl -p -i -e '
s/nsRefPtr\.h/RefPtr\.h/g; # handle includes
s/nsRefPtr ?</RefPtr</g; # handle declarations and variables
'
# Handle a special friend declaration in gfx/layers/AtomicRefCountedWithFinalize.h.
perl -p -i -e 's/::nsRefPtr;/::RefPtr;/' gfx/layers/AtomicRefCountedWithFinalize.h
# Handle nsRefPtr.h itself, a couple places that define constructors
# from nsRefPtr, and code generators specially. We do this here, rather
# than indiscriminately s/nsRefPtr/RefPtr/, because that would rename
# things like nsRefPtrHashtable.
perl -p -i -e 's/nsRefPtr/RefPtr/g' \
mfbt/nsRefPtr.h \
xpcom/glue/nsCOMPtr.h \
xpcom/base/OwningNonNull.h \
ipc/ipdl/ipdl/lower.py \
ipc/ipdl/ipdl/builtin.py \
dom/bindings/Codegen.py \
python/lldbutils/lldbutils/utils.py
# In our indiscriminate substitution above, we renamed
# nsRefPtrGetterAddRefs, the class behind getter_AddRefs. Fix that up.
find . -name '*.cpp' -o -name '*.h' -o -name '*.idl' | \
xargs perl -p -i -e 's/nsRefPtrGetterAddRefs/RefPtrGetterAddRefs/g'
if [ -d .git ]; then
git mv mfbt/nsRefPtr.h mfbt/RefPtr.h
else
hg mv mfbt/nsRefPtr.h mfbt/RefPtr.h
fi
--HG--
rename : mfbt/nsRefPtr.h => mfbt/RefPtr.h
When workers shut down we discard the event queue rather than running it to completion. Originally workers managed their event queue themselves and would simply iterate through the array of events and cancel them all. After bug 914762 this was done by setting a (thread-)global "canceling" flag and then calling NS_ProcessPendingEvents. But this neglects the fact that a shutdown request can be received while the worker is in a sync queue. In that case, calling NS_ProcessPendingEvents will process any events pending in the sync queue, which is *not* the queue we need to cancel.
The fix is, if we are in a sync queue when NotifyInternal is called, to defer clearing the queue until the top-most sync queue is destroyed and we are about to return to the regular event queue. Only then can we call NS_ProcessPendingEvents to clear out the queue. Because we can never process any events from this queue while sync queues are active, the timing of the mass cancellation is unchanged from the perspective of events in the regular queue.
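A sketch of the shape of the fix (member names are illustrative):

void WorkerPrivate::NotifyInternal(/* ... */)
{
  // ...
  if (!mSyncLoopStack.IsEmpty()) {
    // In a sync queue: NS_ProcessPendingEvents would drain the sync
    // queue, not the regular one, so just remember to clear it later.
    mPendingEventQueueClearing = true;
  } else {
    ClearMainEventQueue();  // NS_ProcessPendingEvents with canceling set
  }
}

void WorkerPrivate::DestroySyncLoop(uint32_t aLoopIndex)
{
  // ...
  if (aLoopIndex == 0 && mPendingEventQueueClearing) {
    // About to return to the regular event queue, so it is now safe
    // to cancel everything queued there.
    mPendingEventQueueClearing = false;
    ClearMainEventQueue();
  }
}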
--HG--
extra : rebase_source : f67fbee27c0751068a4e7aaf692cbfc1d3c9aa7c
Per the product discussion, the Notification API should be disabled in
ServiceWorkers in release builds for Firefox 42, since the UX isn't great [1].
The aim is to release it in 44.
Apologies for the code duplication for pref checking in Notification and
ServiceWorkerRegistration. There isn't an easy way to get
ServiceWorkerRegistration's generated binding to include Notification.h without
having an attribute/method that uses Notification.
[1]: https://mana.mozilla.org/wiki/x/TgAJAw
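For the record, the duplicated check has roughly this shape (the pref name and the worker-side helper are assumptions, not the exact code):

// Used as the WebIDL [Func=...] visibility check on both interfaces.
/* static */ bool
Notification::PrefEnabled(JSContext* aCx, JSObject* aObj)
{
  if (NS_IsMainThread()) {
    return Preferences::GetBool("dom.webnotifications.serviceworker.enabled",
                                false);
  }
  // Worker threads cannot touch Preferences directly; read a value
  // mirrored into the worker at startup instead.
  return GetCurrentThreadWorkerPrivate()->DOMServiceWorkerNotificationEnabled();
}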
--HG--
extra : commitid : 5dtc2E63kuM
extra : rebase_source : 4265dcd154462aa4f3b915e9e898fe7b82bf9afc
Right now, synthetic Responses do not have valid channel info. When these
are saved in the Cache and then restored, the restored Response has a
ChannelInfo, but that ChannelInfo does not have valid security info.
Passing this to respondWith() then causes the interception to fail.
This patch modifies Response::Constructor() to initialize its ChannelInfo from
the global. ChannelInfo can now initialize itself from an nsIDocument. All
workers now store their ChannelInfo on the WorkerLoadInfo.
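A sketch of the Constructor change (exact calls abbreviated; treat the helper names as assumptions):

// In Response::Constructor(): seed the ChannelInfo from the global
// before handing back the new Response.
ChannelInfo info;
if (NS_IsMainThread()) {
  nsCOMPtr<nsPIDOMWindow> window = do_QueryInterface(aGlobal.GetAsSupports());
  if (nsIDocument* doc = window ? window->GetExtantDoc() : nullptr) {
    info.InitFromDocument(doc);  // new: ChannelInfo can init from a document
  }
} else {
  // Workers now keep their ChannelInfo on the WorkerLoadInfo.
  info = GetCurrentThreadWorkerPrivate()->GetChannelInfo();
}
internalResponse->InitChannelInfo(info);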
--HG--
extra : commitid : L1wltwPICd8
extra : rebase_source : 8dab4c414eb50e02a00dd2cb3ee848b811060e70