The basic idea, suggested by Olli, is that we can try to get a runnable in
ThreadEventQueue::GetEvent, and if that does not produce anything, unlock our
mutex, do whatever idle-state updates we need to do, and re-lock our mutex.
Then we always need to try getting a runnable again, because a non-idle
runnable might have gotten queued while the mutex was unlocked. So in the
mayWait case we can't sleep on our mutex unless we try to get a runnable
again first.
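A minimal sketch of that shape, using std primitives and hypothetical names
in place of the real Gecko types (mozilla::Mutex/CondVar, nsIRunnable):

    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <mutex>
    #include <utility>

    class ThreadEventQueueSketch {
      std::mutex mMutex;
      std::condition_variable mCondVar;
      std::deque<std::function<void()>> mQueue;

      // May do IPC; must run with mMutex unlocked.
      void UpdateIdleState() {}

      std::function<void()> TakeLocked() {
        if (mQueue.empty()) return nullptr;
        auto ev = std::move(mQueue.front());
        mQueue.pop_front();
        return ev;
      }

     public:
      std::function<void()> GetEvent(bool aMayWait) {
        std::unique_lock<std::mutex> lock(mMutex);
        for (;;) {
          if (auto ev = TakeLocked()) return ev;
          lock.unlock();
          UpdateIdleState();
          lock.lock();
          // A non-idle runnable may have been queued while we were
          // unlocked, so always retry before sleeping.
          if (auto ev = TakeLocked()) return ev;
          if (!aMayWait) return nullptr;
          mCondVar.wait(lock);
        }
      }

      void PutEvent(std::function<void()> aEvent) {
        {
          std::lock_guard<std::mutex> guard(mMutex);
          mQueue.push_back(std::move(aEvent));
        }
        mCondVar.notify_one();
      }
    };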
My notes on the current (pre this patch) unlocking setup follow.
------------------------------------------------------------
There are four places where we currently unlock:
1) IdlePeriodState::GetIdleDeadlineInternal. Needed only when !aIsPeek, to
call RequestIdleToken, which can do IPC. The only caller (via
GetDeadlineForIdleTask) is PrioritizedEventQueue::GetEvent, and only when we
selected the idle or deferred queue. We need this to set the proper deadline
on the idle event. In the cases when this unlock happens we currently _never_
return an idle event, because reaching this point means we do not have an
idle token.
2) IdlePeriodState::GetLocalIdleDeadline. Needs to unlock to get the idle
period hint. This can get called from GetIdleDeadlineInternal in _both_
cases: peek and get. The call stack for the get case is covered above. The
peek case is called from PrioritizedEventQueue::HasReadyEvent, which is
called from ThreadEventQueue::HasPendingEvent.
3) IdlePeriodState::SetPaused, because it sends an IPC message. This is only
called from EnsureIsPaused, which is called from:
- IdlePeriodState::GetIdleDeadlineInternal. Only in the !aIsPeek case.
- IdlePeriodState::RanOutOfTasks called from:
- PrioritizedEventQueue::GetEvent if we fell into the idle case and our
queues are empty.
- PrioritizedEventQueue::DidRunEvent if we are empty.
4) IdlePeriodState::ClearIdleToken because it sends an IPC message. This is
called from:
- IdlePeriodState::RanOutOfTasks; see SetPaused.
- IdlePeriodState::GetIdleDeadlineInternal, as with EnsureIsPaused above.
- IdlePeriodState::GetIdleToken if the token is in the past. This is only
called from GetIdleDeadlineInternal, in both cases.
- IdlePeriodState::FlagNotIdle called from PrioritizedEventQueue::GetEvent
if we find an event in a non-idle queue.
Or rewriting in terms of API entrypoints on IdlePeriodState that might need to
unlock:
* Anything to do with getting deadlines, whether we are peeking or getting.
Basically, if we need an updated deadline we need to unlock.
* When we have detected we are completely out of tasks (idle or not) to run.
Right now we do that either when we're asked for an event and don't have
one, or when we've run an event and are empty after that (before
unlocking!). But the unlocking (or not) happens in
ThreadEventQueue::DidRunEvent, so separately from the getting of the
event. In particular, we are unlocked before we enter DidRunEvent, and we
unlock again before we return from it, so we can do whatever updates we
want there.
* When we have detected that we have a non-idle event to run; this calls
FlagNotIdle.
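At each of these entrypoints the mechanics are the same: release the queue
mutex, do the IPC or hint query, re-acquire. A std-only sketch of that RAII
pattern (Gecko's MutexAutoUnlock plays this role; the names here are
hypothetical):

    #include <mutex>

    template <typename M>
    class AutoUnlock {
      M& mMutex;

     public:
      explicit AutoUnlock(M& aMutex) : mMutex(aMutex) { mMutex.unlock(); }
      ~AutoUnlock() { mMutex.lock(); }
    };

    // Called with aMutex held; releases it for the duration of the IPC so
    // other threads can keep dispatching events, then re-acquires it.
    void ClearIdleTokenSketch(std::mutex& aMutex) {
      AutoUnlock<std::mutex> unlock(aMutex);
      // ... send the IPC message here ...
    }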
Differential Revision: https://phabricator.services.mozilla.com/D53631
--HG--
extra : moz-landing-system : lando
This change makes it possible to access the remote agent service
from C++ and Rust.
Differential Revision: https://phabricator.services.mozilla.com/D50288
--HG--
extra : moz-landing-system : lando
Most event queues don't ever get many events queued at one time, but the
MainThread Input and Normal queues may.
Differential Revision: https://phabricator.services.mozilla.com/D53912
--HG--
extra : moz-landing-system : lando
This should avoid freeing and reallocating the buffer every N events, and
make it simpler to use smaller buffers, especially for non-MainThread queues.
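For illustration, a minimal std-only sketch of the buffer-reuse idea
(hypothetical names, not the actual queue code): a fixed-capacity ring
reuses one allocation instead of freeing and reallocating as events cycle
through.

    #include <cstddef>
    #include <utility>

    template <typename T, size_t N>
    class RingBufferSketch {
      T mStorage[N];
      size_t mHead = 0;
      size_t mCount = 0;

     public:
      bool Push(T aValue) {
        if (mCount == N) return false;  // full; caller grows or spills over
        mStorage[(mHead + mCount++) % N] = std::move(aValue);
        return true;
      }

      bool Pop(T& aOut) {
        if (mCount == 0) return false;
        aOut = std::move(mStorage[mHead]);
        mHead = (mHead + 1) % N;  // the slot is reused, never freed
        --mCount;
        return true;
      }
    };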
Differential Revision: https://phabricator.services.mozilla.com/D53911
--HG--
extra : moz-landing-system : lando
This will prevent the leak checker from reporting a missing log for
that process, which results in incorrect starring on TreeHerder. Both
of these failures should still be detected as failures.
Differential Revision: https://phabricator.services.mozilla.com/D53095
--HG--
extra : moz-landing-system : lando
The launcher process turns on the `PreferSystem32Images` mitigation policy for
the browser process. Since the mitigation policy is inherited, a process launched
by the browser process also has `PreferSystem32Images`. If an application
that does not support `PreferSystem32Images`, such as Skype for Business, is
launched via a hyperlink, a custom URI, or a downloaded file, it fails to
launch.
Bug 1567614 fixed this issue by introducing `mozilla::ShellExecuteByExplorer` to
`nsMIMEInfoWin::LoadUriInternal`. This patch introduces
`mozilla::ShellExecuteByExplorer` to two more places.
1. xul!nsLocalFile::Launch
This is invoked when a user opens a file from the Download Library, or a user
opens a downloaded file with the default application without saving it.
2. xul!nsMIMEInfoWin::LaunchWithFile
This is invoked when a user opens a downloaded file with a custom application
(configured in about:preferences) without saving it.
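A hypothetical call-site sketch of what these two places gain (the helper's
exact signature and header are assumptions here and may differ; the path is
a placeholder). Launching through Explorer means the child inherits
Explorer's mitigation policies rather than ours:

    #include <comutil.h>   // _bstr_t, _variant_t
    #include <windows.h>   // SW_SHOWNORMAL

    // Hypothetical; mirrors the helper's documented purpose from bug
    // 1567614, not its verbatim signature.
    auto result = mozilla::ShellExecuteByExplorer(
        _bstr_t(L"C:\\placeholder\\downloaded-file.ext"),  // what to open
        _variant_t(),                // arguments
        _variant_t(),                // verb (default: open)
        _variant_t(),                // working directory
        _variant_t(SW_SHOWNORMAL));  // show command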
*Why does this patch change worker.js?*
The mochitest dom/tests/browser/browser_test_new_window_from_content.js failed
if it was executed after dom/serviceworkers/test/browser_download.js in the
same batch. This was because browser_download.js launched Notepad to open
fake_download.bin.txt, preventing a new window from being opened in the
foreground in browser_test_new_window_from_content.js.
The test browser_download.js can verify the downloaded data without opening
an associated application, so this patch adds a Content-Type to the response
headers so that Notepad is not opened on Windows.
Differential Revision: https://phabricator.services.mozilla.com/D52567
--HG--
extra : moz-landing-system : lando
Some kinds of leaks prevent NSS from shutting down properly, and right
now if that happens we crash. This interferes with our existing leak
checking code, so in this patch I make it not crash if we are already
leak checking. The failure is reported as a fake leak instead, so the
test should still fail.
Differential Revision: https://phabricator.services.mozilla.com/D52914
--HG--
extra : moz-landing-system : lando
This adds two AUTO_PROFILER_LABEL_DYNAMIC_... macros and updates select
usages of the old macros to use the new ones. These new macros cause
the dynamic string of the label to be included in BHR stacks.
We don't want to do this all of the time, as in many cases we may not
be interested enough in the dynamic string or it may be sensitive
information, but it is rather important information for certain cases.
This uses the same buffer that we use for the strings for JS frames,
and if we fail to fit into that buffer we just append the raw label.
If the string is too long for our static buffer (128 bytes), we just
leave it truncated, as it should be stable and we may be able to infer
from the truncated form what the full form would be.
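A minimal sketch of that truncation fallback, assuming only what the text
above states (the 128-byte size); the names are hypothetical:

    #include <cstring>

    static const size_t kLabelBufSize = 128;

    // Copies the dynamic string into the shared fixed-size buffer,
    // truncating if it does not fit; a stable prefix is usually enough
    // to infer the full form.
    void CopyDynamicLabel(const char* aDynamic, char* aBuf) {
      std::strncpy(aBuf, aDynamic, kLabelBufSize - 1);
      aBuf[kLabelBufSize - 1] = '\0';
    }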
Differential Revision: https://phabricator.services.mozilla.com/D51665
--HG--
extra : moz-landing-system : lando
Needed to compare with `nsTextFragment::Get1b()`, which returns
latin1-encoded characters. Used in a subsequent review.
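A hypothetical sketch of the kind of comparison this enables (not the
actual helper added here); ASCII is a subset of latin1, so a bytewise
comparison suffices for ASCII needles:

    #include <cstddef>

    // Compare aLen latin1 bytes (as returned by nsTextFragment::Get1b())
    // against a null-terminated ASCII string, including length.
    bool Latin1EqualsASCII(const char* aLatin1, size_t aLen,
                           const char* aAscii) {
      for (size_t i = 0; i < aLen; ++i) {
        if (aAscii[i] == '\0' || aLatin1[i] != aAscii[i]) {
          return false;
        }
      }
      return aAscii[aLen] == '\0';
    }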
Differential Revision: https://phabricator.services.mozilla.com/D52343
--HG--
extra : moz-landing-system : lando
The formatting change presumably happens because clang-format treats
include guards differently.
Differential Revision: https://phabricator.services.mozilla.com/D52698
--HG--
extra : moz-landing-system : lando
The include guard needs to happen before any non-trivial tokens.
I guess this change made clang-format decide the other ifdefs aren't
actually nested, so it dropped the indents.
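For illustration, with a hypothetical header:

    #ifndef mozilla_Example_h
    #define mozilla_Example_h
    // The guard must be the first non-trivial tokens in the file; only
    // comments may precede it.

    #ifdef XP_WIN
    // Once clang-format recognizes the outer #ifndef as a guard, it no
    // longer treats this block as nested, so it drops the indent.
    #endif

    #endif  // mozilla_Example_h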
Differential Revision: https://phabricator.services.mozilla.com/D52691
--HG--
extra : moz-landing-system : lando
- Introduce `dom.webcomponents.elementInternals.enabled` for custom element's elementInternals.
- Implement disabledFeatures static field and disableInternals.
- Refactor get observedAttributes sequence.
Differential Revision: https://phabricator.services.mozilla.com/D52156
--HG--
extra : moz-landing-system : lando
This patch changes the xpidl parser to generate the rust trait code for
methods that take or return a cenum value.
Previously this would return an error, which meant that adding a method
that uses cenums to an existing interface could cause Rust code that
implements that interface to fail to build.
The generated methods take or return u8/u16/u32 depending on the width of the
enum. While this is not optimal (the parameter could contain values that are
not actually part of the enum), this is similar to what we do for nsLoadFlags.
In the future it would be nice to generate code that actually checks the
values are present in the enum, and to use a typedef instead of a plain
unsigned int.
Differential Revision: https://phabricator.services.mozilla.com/D51838
--HG--
extra : moz-landing-system : lando
Thread pools run an event that then runs other events, so we need to tweak
things for GetRunningEventDelay().
Differential Revision: https://phabricator.services.mozilla.com/D44058
--HG--
extra : moz-landing-system : lando
This lets us determine the time that an event has been running, and the time
that the event spent queued, which can be used to figure out 'jank' at the
time the event was queued. For PrioritizedEventQueues, the queuing delay is
only reported if the queuing would delay an input event.
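A sketch of the bookkeeping this implies, with std::chrono standing in for
Gecko's TimeStamp/TimeDuration and hypothetical names:

    #include <chrono>

    using Clock = std::chrono::steady_clock;

    struct QueuedEvent {
      Clock::time_point dispatchTime;  // recorded when the event is queued
    };

    struct RunningEventInfo {
      Clock::time_point runStart;   // recorded when the event starts running
      Clock::duration queuedDelay;  // runStart - dispatchTime: the "jank"
    };

    RunningEventInfo BeginRun(const QueuedEvent& aEvent) {
      RunningEventInfo info;
      info.runStart = Clock::now();
      info.queuedDelay = info.runStart - aEvent.dispatchTime;
      return info;
    }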
Differential Revision: https://phabricator.services.mozilla.com/D41279
--HG--
extra : moz-landing-system : lando
The static analysis caught this for me in Bug 1593812; I was just too
dumb to actually apply this change prior to commit.
Differential Revision: https://phabricator.services.mozilla.com/D52170
--HG--
extra : moz-landing-system : lando
We need some way of differentiating "tasks that just consume CPU"
vs. "tasks that block on some external resource", like reading from a
socket or a file. If we didn't have this, we'd either a) have a thread
pool sized for the number of CPUs, where all the threads can wind up
blocked on I/O and no new tasks are able to run, or b) have a thread
pool that tries to increase the number of working threads based on the
number of submitted tasks and winds up having too many tasks running
with not enough CPUs to run them on.
This flag enables us to theoretically get the best of both worlds: we
can set aside `~#CPUs` threads for CPU-intensive work, and
`$SOME_NUMBER` threads for I/O work. The latter number can be adjusted
up if the I/O load on the system is particularly heavy.
The implementation strategy of this patch is to use two separate thread
pools for the two different kinds of work. It's entirely possible that
we'll want to use a single thread pool to coordinate thread creation
between the two kinds of work, or even migrate threads from one kind of
work to the other, but such improvements can be future work. The focus
right now is providing the rest of Gecko with a common funnel to put
tasks into, and we can adjust what's at the end of the funnel at a later
point.
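A minimal std-only sketch of that two-pool strategy (hypothetical names,
not the Gecko implementation): one pool per kind of work, selected by a
flag at dispatch time.

    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <mutex>
    #include <thread>
    #include <utility>
    #include <vector>

    class PoolSketch {
      std::mutex mMutex;
      std::condition_variable mCondVar;
      std::deque<std::function<void()>> mTasks;
      std::vector<std::thread> mWorkers;
      bool mDone = false;

     public:
      explicit PoolSketch(unsigned aThreads) {
        for (unsigned i = 0; i < aThreads; ++i) {
          mWorkers.emplace_back([this] {
            for (;;) {
              std::function<void()> task;
              {
                std::unique_lock<std::mutex> lock(mMutex);
                mCondVar.wait(lock,
                              [this] { return mDone || !mTasks.empty(); });
                if (mDone && mTasks.empty()) return;
                task = std::move(mTasks.front());
                mTasks.pop_front();
              }
              task();
            }
          });
        }
      }

      ~PoolSketch() {
        {
          std::lock_guard<std::mutex> lock(mMutex);
          mDone = true;
        }
        mCondVar.notify_all();
        for (auto& w : mWorkers) w.join();
      }

      void Dispatch(std::function<void()> aTask) {
        {
          std::lock_guard<std::mutex> lock(mMutex);
          mTasks.push_back(std::move(aTask));
        }
        mCondVar.notify_one();
      }
    };

    enum class TaskKind { CpuBound, MayBlock };

    class DualPoolSketch {
      PoolSketch mCpuPool{std::thread::hardware_concurrency()};
      PoolSketch mIoPool{32};  // tunable upward under heavy I/O load

     public:
      void Dispatch(std::function<void()> aTask, TaskKind aKind) {
        (aKind == TaskKind::CpuBound ? mCpuPool : mIoPool)
            .Dispatch(std::move(aTask));
      }
    };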
Differential Revision: https://phabricator.services.mozilla.com/D51708
--HG--
extra : moz-landing-system : lando