PGamepadEventChannelParent will need to send an async message back to the
child with info about the new shared memory region upon creation.
The IPDL channel is not ready during allocation, so we need to defer
registering the actor with GamepadPlatformService until the IPDL
"RecvConstructor" message so the channel will be available.
This is just a straight refactor to prepare for the upcoming change.
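A rough sketch of the deferral described above; the handler and service method names are assumptions based on this description, not the actual code:
```
// Allocation only: the IPDL channel is not usable yet, so don't register.
already_AddRefed<PGamepadEventChannelParent> AllocPGamepadEventChannelParent() {
  return MakeAndAddRef<GamepadEventChannelParent>();
}

// The channel is up by the time the constructor message arrives, so it is now
// safe to register with GamepadPlatformService and later send async messages
// (e.g. the shared memory info) back to the child.
mozilla::ipc::IPCResult RecvPGamepadEventChannelConstructor(
    PGamepadEventChannelParent* aActor) {
  RefPtr<GamepadPlatformService> service =
      GamepadPlatformService::GetParentService();
  service->AddChannelParent(static_cast<GamepadEventChannelParent*>(aActor));
  return IPC_OK();
}
```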
Differential Revision: https://phabricator.services.mozilla.com/D105127
This patch addresses several quirks around the shutdown terminator.
- Use the same shutdown phase definitions in AppShutdown and nsTerminator. This touches quite a few files.
- Ensure that the terminator phase shift is handled before any shutdown observer notifications are sent and reduce its heartbeat duration.
- Add missing phases to the shutdown telemetry.
Please note that this changes the unit of "tick" to 100ms rather than 1s.
As a side effect, we also remove the obsolete "shutdown-persist" context.
While the existing test coverage continues to exercise the most important functions, we acknowledge the wish for better test coverage in [[ https://bugzilla.mozilla.org/show_bug.cgi?id=1693966 | bug 1693966 ]].
Differential Revision: https://phabricator.services.mozilla.com/D103626
This makes the naming more consistent with other functions called
Insert and/or Update. Also, it removes the ambiguity about whether
Put expects an entry to already exist or not, in particular because
it differed from nsTHashtable::PutEntry in that regard.
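A minimal usage sketch of the renamed API (assuming an nsTHashMap-style table; the exact classes and overloads at the call sites this patch touches may differ):
```
nsTHashMap<nsCStringHashKey, int32_t> table;
table.InsertOrUpdate("key"_ns, 1);  // inserts a new entry
table.InsertOrUpdate("key"_ns, 2);  // updates the existing entry in place
```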
Differential Revision: https://phabricator.services.mozilla.com/D105473
In bug 1691415 we're getting failures to deserialize. It would be
nice to move these closer to the source so that we can better understand
what's going on.
Differential Revision: https://phabricator.services.mozilla.com/D105688
We know that some GV installations (particularly but not exclusively Focus) are
failing to load `libxul.so` during early Gecko bootstrapping. Unfortunately
a boolean pass/fail result is not giving us sufficient information to be able to
properly troubleshoot this problem.
This patch adds `mozilla::Result`-based return values to `XPCOMGlueLoad` and
`GetBootstrap` in an effort to produce more actionable information about these
failures.
We include either a `nsresult` or, if the failure is rooted in a dynamic linker
failure, appropriate platform-specific error information:
* On Unix-based platforms, a `UniqueFreePtr<char>` containing the string from `dlerror(3)`;
* On Windows, the Win32 `DWORD` error code from `GetLastError()`.
For non-Android platforms, I updated them to handle the new return type, but
otherwise did not make any further changes.
For Android, we include the error information in the message string that we pass
into the Java `Exception` that is subsequently thrown.
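A rough sketch of the shape of the returned error information, with illustrative names (the real types in the patch may be organized differently):
```
#if defined(XP_WIN)
// Win32 error code from GetLastError().
using DLErrorType = unsigned long;  // DWORD
#else
// Error string captured from dlerror(3).
using DLErrorType = UniqueFreePtr<char>;
#endif

struct BootstrapError {
  // Either an nsresult or a dynamic-linker error, depending on where the
  // failure happened.
  Variant<nsresult, DLErrorType> mError;
};

// GetBootstrap would then return something along the lines of
// Result<Bootstrap::UniquePtr, BootstrapError>, letting callers log
// actionable details instead of a bare pass/fail.
```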
Differential Revision: https://phabricator.services.mozilla.com/D104263
This point in the code is the end of receiving a message, not the start
(note the `if (partial) { break; }` above), so it should be marked
accordingly. The profiler frontend is expecting the end marker;
currently two of the four reported time intervals are `unknown` on
Windows, and this patch fixes that.
(Recording the receiving start time is complicated, because we don't
have a `Message` object until we've read the buffer with the (end of
the) header, and it might make more sense to timestamp it before the
first receive operation. Currently, neither channel implementation
attempts this.)
Differential Revision: https://phabricator.services.mozilla.com/D105094
Currently, we add an `eSending`/`TransferStart` marker every time we
send data as part of a message, rather than only the first time. The
profiler frontend appears to ignore the extra markers (the displayed time
intervals look reasonable), but it's wasteful: on large messages it can
consume enough CPU time to appear in the profile itself.
Differential Revision: https://phabricator.services.mozilla.com/D105093
There are no code changes, only #include changes.
It was a fairly mechanical process: Search for all "AUTO_PROFILER_LABEL", and in each file, if only labels are used, convert "GeckoProfiler.h" into "ProfilerLabels.h" (or just add that last one where needed).
In some files, there were also some marker calls but no other profiler-related calls, in these cases "GeckoProfiler.h" was replaced with both "ProfilerLabels.h" and "ProfilerMarkers.h", which still helps in reducing the use of the all-encompassing "GeckoProfiler.h".
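An illustrative before/after for a file that only uses label macros (the exact include paths may vary per file):
```
// Before: pulls in the full profiler API surface.
#include "GeckoProfiler.h"

// After: only what AUTO_PROFILER_LABEL needs.
#include "mozilla/ProfilerLabels.h"
// Plus the marker header if the file also records markers.
#include "mozilla/ProfilerMarkers.h"
```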
Differential Revision: https://phabricator.services.mozilla.com/D104588
The current set of cases where a nsIInputStream is directly serialized over IPC
is fairly limited, and includes cases where the IPDL struct may be directly
stored within a content process, leaving the nsIInputStream unused.
By switching this serialization to DelayedStart, we can avoid performing
unnecessary streaming copies of IPC data when sending postdata streams related
to session history and document navigation, while also preventing issues like
this from coming up again, since delayed start becomes the default.
Differential Revision: https://phabricator.services.mozilla.com/D101804
When aDelayedStart is specified and the created IPCStream contains an internal
IPC stream actor, that actor is used under the hood to send raw data between
processes only when needed. This is usually done to reduce overhead in cases where
the target process may not use the nsIInputStream immediately, if at all.
By switching to using RemoteLazyInputStream when sending DelayedStart actors
from parent to content, the amount of data sent in the initial payload is
reduced even further, and the stream is optimized to allow sending it back to
the parent process without streaming the data through the content process again.
Normal delayed start streams are still used when sending payloads from other
processes, as RemoteLazyInputStream is only supported for nsIInputStreams
originating in the parent process.
Differential Revision: https://phabricator.services.mozilla.com/D101803
Previously, we would apply the InputStreamLengthWrapper before
DelayedStartInputStream, which meant that a stream serialized with aDelayedStart
would not correctly implement nsIInputStreamLength. By inverting the order of
these wrappers, the nsIInputStreamLength implementation is visible, without
impacting the functionality of the DelayedStartInputStream wrapper.
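A conceptual sketch of the reordering, with constructor signatures assumed purely for illustration:
```
// Before: the length wrapper is buried inside, so QI for nsIInputStreamLength
// on the outer stream fails.
//   outer = new DelayedStartInputStream(
//       new InputStreamLengthWrapper(base, length));
//
// After: the length wrapper is outermost, so nsIInputStreamLength stays
// visible while the delayed-start behaviour underneath is unchanged.
//   outer = new InputStreamLengthWrapper(
//       new DelayedStartInputStream(base), length);

nsCOMPtr<nsIInputStreamLength> lengthStream = do_QueryInterface(outer);
MOZ_ASSERT(lengthStream, "length interface must be reachable on the outer wrapper");
```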
Differential Revision: https://phabricator.services.mozilla.com/D101802
Without the other patches in this series, this test fails both with and
without Fission enabled, for two different reasons.
With Fission disabled, the second reload request appears empty, because we are
unable to rewind the postData nsIInputStream. With Fission enabled, the
second reload request causes crashes because nsMIMEInputStream's invariant of
requiring a seekable stream is violated, causing the nsICloneableInputStream
implementation to misbehave.
Differential Revision: https://phabricator.services.mozilla.com/D101800
Previously, validating enum values in EnumSerializer invoked undefined
behaviour when the values were actually invalid (hit during fuzzing, for
example), because the invalid integral value was cast to the enum type for
comparison.
This patch changes the comparison to cast the valid values to their integral
values instead and compare those.
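A minimal sketch of the fixed comparison (the real validator templates in EnumSerializer.h are more general; names here are illustrative):
```
template <typename E, E MinValue, E HighBoundValue>
static bool IsLegalValue(std::underlying_type_t<E> aRaw) {
  // Cast the known-valid bounds to the integral type and compare integers,
  // rather than casting an arbitrary integral value to the enum type, which
  // is undefined behaviour when the value is out of range.
  using U = std::underlying_type_t<E>;
  return aRaw >= static_cast<U>(MinValue) &&
         aRaw < static_cast<U>(HighBoundValue);
}
```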
Differential Revision: https://phabricator.services.mozilla.com/D103449
Applying a bulk filter on accessibles in the content process allows us to avoid a potentially large (and variable) number of IPC sync calls to retrieve the accessible names. I chose to implement this as a "post filter" and not to actually do the entire search in content because it would cause a lot of duplication of code for non-IPC searching, and we wouldn't have the flexibility to combine a text search with any arbitrary search key as the API requires.
I also generalized the RangeTypes.h header to PlatformExtTypes so it can be used to define filter types as well.
Differential Revision: https://phabricator.services.mozilla.com/D100730
Bug 1583109 introduced new function templates StringJoin and StringJoinAppend.
These are now used to replace several custom loops across the codebase that
implement string-joining algorithms to simplify the code.
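A hedged usage sketch; the exact overload set introduced in bug 1583109 is assumed here, so treat the signature as illustrative:
```
nsTArray<nsCString> parts;
parts.AppendElement("a"_ns);
parts.AppendElement("b"_ns);
parts.AppendElement("c"_ns);

// Replaces the hand-written "append a separator unless this is the first
// element" loop pattern.
nsAutoCString joined;
StringJoinAppend(joined, ","_ns, parts);  // joined == "a,b,c"
```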
Differential Revision: https://phabricator.services.mozilla.com/D98750
Passing a union here allows us to reuse code and trim some code which was
duplicated to handle the different NodeId formats. This also consolidates the
former `NodeId` and `NodeIdData` structures into a new structure (working name
`NodeIdParts`) which represents parts that will later be converted to a string
based NodeId.
Differential Revision: https://phabricator.services.mozilla.com/D93569
This moves parts of IPCMessageUtils.h to two new header files and adapts
the include directives as necessary. The new header files are:
- EnumSerializer.h, which defines the templates for enum serializers
- IPCMessageUtilsSpecializations.h, which defines template specializations
of ParamTraits with extra dependencies (building upon both IPCMessageUtils.h
and EnumSerializer.h)
This should minimize the dependencies pulled in by every consumer of
IPCMessageUtils.h
Differential Revision: https://phabricator.services.mozilla.com/D94459
### Story
When the existing top browsing context is replaced, SessionStorage relies on
being propagated by SessionStore. However, this has been disabled with
session history in the parent process.
Therefore, we need to propagate SessionStorage data by propagating
BackgroundSessionStorageManager.
### Notes
This patch assumes that the target top-level browsing context shouldn't have
been registered to BackgroundSessionStorageManager. If this can happen, we will
either need to merge SessionStorage data into it or find a proper (earlier)
place to update sManagers.
### Test Plan
Test: D97763
Try: https://treeherder.mozilla.org/jobs?repo=try&revision=2acc7b393fb80b640f4fbe3ade1da7dd440c380e&selectedTaskRun=KK6XhR-sQuqv5lcntVLc2w.0
Differential Revision: https://phabricator.services.mozilla.com/D98082
Classes that inherit from DiscardableRunnable are only promising that it is OK
for Run() to be skipped, rather than promising that Cancel() is effective.
Differential Revision: https://phabricator.services.mozilla.com/D98117
To allow `requestAnimationFrame()` and similar things to run at monitor
speed if there is only a window-specific vsyncsource available.
This is the case for Wayland and, in the future, EGL/X11. Other backends
may opt for window specific vsyncsources as well at some point.
The idea is to, instead of using global vsync objects, expose a vsyncsource
from nsWindow and use it for refresh drivers. For the content process, move
VsyncChild to BrowserChild, so for each BrowserChild there is only one
VsyncChild to which all refresh drivers connect.
IPC is managed either by PBrowser or PBackground. Right now, PBrowser is
only used on Wayland, as both PBrowser and the Wayland vsyncsource run
on the main thread. Other backends keep using the background thread for
now.
While at it, make it so that we constantly update the refresh rate. This
is necessary for Wayland, but also on other platforms variable refresh rates
are increasingly common. Do that by transmitting the vsync rate in `SendNotify()`.
How to test:
- run the Wayland backend
- enable `widget.wayland_vsync.enabled`
- optionally: disable `privacy.reduceTimerPrecision`
- run `vsynctester.com` or `testufo.com`
Expected results:
Instead of fixed 60Hz, things should update at monitor refresh rate -
e.g. 144Hz
Original patch by Kenny Levinsen.
Depends on D98254
Differential Revision: https://phabricator.services.mozilla.com/D93173
When using an x64 GMP child process with an arm64 parent process on arm64 Macs, use a 16k Shmem pagesize in the child process.
Differential Revision: https://phabricator.services.mozilla.com/D98241
To allow `requestAnimationFrame()` and similar things to run at monitor
speed if there is only a window-specific vsyncsource available.
This is the case for Wayland and, in the future, EGL/X11. Other backends
may opt for window specific vsyncsources as well at some point.
The idea is to, instead of using global vsync objects, expose a vsyncsource
from nsWindow and use it for refresh drivers. For the content process, move
VsyncChild to BrowserChild, so for each BrowserChild there is only one
VsyncChild to which all refresh drivers connect.
IPC is managed either by PBrowser or PBackground. Right now, PBrowser is
only used on Wayland, as both PBrowser and the Wayland vsyncsource run
on the main thread. Other backends keep using the background thread for
now.
While at it, make it so that we constantly update the refresh rate. This
is necessary for Wayland, but also on other platforms variable refresh rates
are increasingly common. When using PVsync, limit updates to once every
250ms in order to minimize overhead while still updating fast.
How to test:
- run the Wayland backend
- enable `widget.wayland_vsync.enabled`
- optionally: disable `privacy.reduceTimerPrecision`
- run `vsynctester.com` or `testufo.com`
Expected results:
Instead of fixed 60Hz, things should update at monitor refresh rate -
e.g. 144Hz
Original patch by Kenny Levinsen.
Differential Revision: https://phabricator.services.mozilla.com/D93173
TimeStamps in markers must now be streamed through `SpliceableJSONWriter::TimeProperty(name, timestamp)`.
This is consistent with all other JSON-writing functions being in `SpliceableJSONWriter` (and base class `JSONWriter`).
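A small sketch of the new call shape (surrounding marker-streaming code is omitted; the property name is illustrative):
```
void StreamPayload(SpliceableJSONWriter& aWriter, const TimeStamp& aStart) {
  // TimeStamps are now streamed through the writer itself.
  aWriter.TimeProperty("startTime", aStart);
}
```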
Depends on D97556
Differential Revision: https://phabricator.services.mozilla.com/D97557
Check `profiler_is_locked_on_current_thread()` before recording an IPC marker.
This removes the deadlock found in bug 1675406:
- SamplerThread: During sampling, some data is recorded into the profile buffer, which locks ProfileChunkedBuffer::mMutex, this triggers some chunk updates that are sent to ProfileBufferGlobalController, which attempts to lock its mutex.
- Main thread: An IPC with an update arrives, ProfileBufferGlobalController locks its mutex, then it sends an IPC out, this records a marker into ProfileChunkedBuffer, which attempts to lock its mMutex.
With this patch and bug 1671403, that last IPC will not record a marker anymore.
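Sketch of the guard; the marker-emission call here is illustrative, the check is the function named above:
```
if (!profiler_is_locked_on_current_thread()) {
  // We are not inside a profiler buffer lock on this thread, so recording a
  // marker cannot re-enter a mutex we already hold.
  PROFILER_MARKER_UNTYPED("IPC", IPC);
}
```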
Differential Revision: https://phabricator.services.mozilla.com/D96971
* We add a new function to `mscom/Utils.h`: `DiagnosticNameForIID`. Its
purpose is to generate a friendly name for an interface, given an IID.
For special interfaces internal to COM, we add our own descriptive strings.
If the interface does not have a description, we simply convert the IID
to string format using GUIDToString.
* We modify `ProfilerMarkerChannelHook` to include the additional diagnostic
information for IIDs in its markers.
* Since each marker is now differentiated by IID, we remove the restriction
that we only use markers for the outermost COM call. In particular, this
assumption doesn't hold for asynchronous COM calls, so we would be losing
data in the case where an async call was pending while the main thread was
still making COM calls on other interfaces.
* There isn't really an effective way to distinguish between sync and
async calls at the channel hook layer. I'm thinking about how we could
perhaps modify `AsyncInvoker` to help mark these, but it's a bit messy.
I'm going to postpone that to future work.
* Other potential future work is expanding the number of interfaces for which
we have friendly names. I could see us annotating our various COM interfaces
in a way that we could automagically generate human-readable descriptions for
those interfaces.
Differential Revision: https://phabricator.services.mozilla.com/D97042
I realized that calling `mscom::IsProxy` is kind of redundant when we already
need to query for `ICallFactory`.
We still allow `aIsProxy` as an optional constructor argument, but if not
present then we go straight to `QueryInterface(IID_ICallFactory)`.
Differential Revision: https://phabricator.services.mozilla.com/D97047
We can now chain creation of the decoder to launching the RDD process as needed and setting up the PRemoteDecoderManager.
Differential Revision: https://phabricator.services.mozilla.com/D96365
PDMFactory::CreateDecoder is changed to return a MozPromise that will contain the MediaDataDecoder once created.
This will allow us to later make RemoteDecoderManager fully asynchronous and no longer require a sync IPC call to start the RDD process.
We also modify the WebrtcMediaDataDecoderCodec to never create a decoder on the main thread, which could cause deadlocks under some circumstances.
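A hedged sketch of what consuming the promise-returning API could look like (the promise shape, thread-getter, and parameter names are assumptions):
```
RefPtr<PDMFactory> factory = MakeRefPtr<PDMFactory>();
factory->CreateDecoder(aParams)->Then(
    GetCurrentSerialEventTarget(), __func__,
    [](RefPtr<MediaDataDecoder> aDecoder) {
      // Decoder is ready; no sync IPC was needed to get here.
    },
    [](const MediaResult& aError) {
      // Creation failed; the error arrives asynchronously as well.
    });
```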
Differential Revision: https://phabricator.services.mozilla.com/D96364
The new infrastructure consists of a separate bridge between the content and the
parent process and a separate local storage database in the parent process.
The new infrastructure can be used for storing and sharing of private browsing
data across content processes.
This patch only creates necessary infrastructure, actual enabling of storing and
sharing of data across content processes will be done in a follow-up patch.
Differential Revision: https://phabricator.services.mozilla.com/D96562
When running as a "universal" build, use an x64 GMP child process if the CDM library is an x64 binary.
Use ifdefs extensively to reduce risk to Intel builds if the fix needs to be uplifted.
Requires a server-side balrog change to serve an Intel Widevine binary to ARM browser versions.
Differential Revision: https://phabricator.services.mozilla.com/D96288
Since python creates little-endian utf-16 consistently whether
cross-compiling from Linux or compiling natively on macOS, we could
write a small script that essentially replaces iconv. On the other hand,
we're also doing some manual preprocessing on the InfoPlist.strings.in
files, and we might as well use the preprocessor for that.
So, we augment the preprocessor to allow an explicit output encoding
other than utf-8, and use the preprocessor instead of `sed | iconv`.
Differential Revision: https://phabricator.services.mozilla.com/D96013
I need this for some changes I want to make to Win32 file pickers.
* We add an event-driven variant to `mscom::AsyncInvoker`. When the async call
invokes `ISynchronize::Signal`, we post an event to the specified event
target (or implicitly to the main thread).
* For this to work, the async call needs to temporarily retain a reference to
itself, otherwise the async call object is destroyed and the in-flight call
is cancelled. This reference is stored in an "outer runnable" which is
responsible for executing the inner completion runnable, and then dropping
the self-reference.
* We only run the completion runnable upon *successful* initiation of the async
call. If there was a failure, we return that code immediately to the caller.
Failures also clear the reference to the completion runnable.
* If we could not obtain an async interface and must run synchronously, then
we run the completion runnable immediately after a successful synchronous
invocation.
Differential Revision: https://phabricator.services.mozilla.com/D95808
We add new DLL registration code. This is a rather generic function that
permits the following:
* Registering multiple `CLSID`s for the same DLL;
* Registering an optional `AppID`. Registering an `AppID` allows us to use a
`DllSurrogate` to host the DLL out-of-process using Windows' built-in
`dllhost.exe`. I'll be using this feature in a future bug.
* Supporting all available threading models;
* Registering either inproc servers or inproc handlers;
* Using the transaction-based registry API so that we can cleanly rollback
during registration if any part(s) of it fail.
Differential Revision: https://phabricator.services.mozilla.com/D95606
In bug 1595994 we attempted to streamline the ability to determine which decoders are available regardless of the process they would run in. This was subsequently done via the PDMFactory.
As there are several JS APIs that can query which codecs are supported, a synchronous mechanism is required.
This allowed making a determination during the PlatformDecoderModule::Supports call, depending on which process it was going to be called from.
Having a synchronous IPC call to the RemoteDecoderManagerParent has too many caveats to be workable.
So what we do instead is first determine at launch whether the required external frameworks are available and pass this information to each content process.
When checking if a decoder is available, we make a best guess at determining whether the PDM would support such a codec, without actually loading the framework when running in the content process.
Supports can no longer make a decision based on the currently running process, and as such PDM::CreateAudio/VideoDecoder implementations using an optional system framework now need to further check the validity of the CreateDecoderParam argument.
Differential Revision: https://phabricator.services.mozilla.com/D95245
Some cleanup in our smart pointer stuff: Using `move` semantics lets us avoid
needing to hack around our static analysis.
Depends on D95599
Differential Revision: https://phabricator.services.mozilla.com/D95600
Allow-list all Python code in tree for use with the black linter, and re-format all code in-tree accordingly.
To produce this patch I did all of the following:
1. Make changes to tools/lint/black.yml to remove include: stanza and update list of source extensions.
2. Run ./mach lint --linter black --fix
3. Make some ad-hoc manual updates to python/mozbuild/mozbuild/test/configure/test_configure.py -- it has some hard-coded line numbers that the reformat breaks.
4. Make some ad-hoc manual updates to `testing/marionette/client/setup.py`, `testing/marionette/harness/setup.py`, and `testing/firefox-ui/harness/setup.py`, which have hard-coded regexes that break after the reformat.
5. Add a set of exclusions to black.yml. These will be deleted in a follow-up bug (1672023).
# ignore-this-changeset
Differential Revision: https://phabricator.services.mozilla.com/D94045
This commit also allows `memfd_create` in the seccomp-bpf policy for all
process types.
`memfd_create` is an API added in Linux 3.17 (and adopted by FreeBSD
for the upcoming version 13) for creating anonymous shared memory
not connected to any filesystem. Supporting it means that sandboxed
child processes on Linux can create shared memory directly instead of
messaging a broker, which is unavoidably slower, and it should avoid
the problems we'd been seeing with overly small `/dev/shm` in container
environments (which were causing serious problems for using Firefox for
automated testing of frontend projects).
`memfd_create` also introduces the related operation of file seals:
irrevocably preventing types of modifications to a file. Unfortunately,
the most useful one, `F_SEAL_WRITE`, can't be relied on; see the large
comment in `SharedMemory::ReadOnlyCopy` for details. So we still use
the applicable seals as defense in depth, but read-only copies are
implemented on Linux by using procfs (and see the comments on the
`ReadOnlyCopy` function in `shared_memory_posix.cc` for the subtleties
there).
There's also a FreeBSD implementation, using `cap_rights_limit` for
read-only copies, if the build host is new enough to have the
`memfd_create` function.
The support code for Android, which doesn't support shm_open and can't
use the memfd backend because of issues with its SELinux policy (see bug
1670277), has been reorganized to reflect that we'll always use its own
API, ashmem, in that case.
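A minimal standalone illustration of the memfd_create-plus-seals pattern this backend builds on (error handling trimmed; Gecko's actual SharedMemory code is considerably more involved):
```
#include <fcntl.h>     // F_ADD_SEALS, F_SEAL_*
#include <sys/mman.h>  // memfd_create (glibc >= 2.27)
#include <unistd.h>    // ftruncate, close

int CreateSealedShmemFd(size_t aSize) {
  int fd = memfd_create("example-shm", MFD_CLOEXEC | MFD_ALLOW_SEALING);
  if (fd < 0) {
    return -1;
  }
  if (ftruncate(fd, static_cast<off_t>(aSize)) < 0) {
    close(fd);
    return -1;
  }
  // Defense in depth: forbid resizing and further seal changes. F_SEAL_WRITE
  // is deliberately not relied on, per the caveats described above.
  fcntl(fd, F_ADD_SEALS, F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL);
  return fd;
}
```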
Differential Revision: https://phabricator.services.mozilla.com/D90605
We also remove unnecessary checks: BackgroundParent only runs in the parent process and only if e10s is on, and RecvLaunchRDDProcess will only ever be called if the RDD pref is already on.
By streamlining the call we also reduce the number of sync dispatches to 1.
Depends on D93317
Differential Revision: https://phabricator.services.mozilla.com/D93476
Add a synchronous Supports message to the IPDL PRemoteDecoderManager protocol so
decoder modules can query for playback support in the actual process that will
attempt to do the decoding.
Depends on D54878
Differential Revision: https://phabricator.services.mozilla.com/D54879
Aside from its use in AddProfilerMarker(), after initialization mPeerPid
is only used on the IO thread, so the write to it does not hold the monitor.
This means that the read in AddProfilerMarker() can cause a race, even
though we hold the monitor. This method is only called when we hold
the monitor and everything is set up, so I think we can just use
mListener->OtherPid() to get the PID.
Differential Revision: https://phabricator.services.mozilla.com/D93810
The `clobber` targets are superseded by `mach clobber`, so we don't need them for any reason. The `clean` target is meant to get you to a post-`configure` state, but it doesn't really work, and if it's necessary for you to be in that state for some reason you can just clobber and re-`configure`, so it doesn't seem worth it to get it working again. Instead, delete all of them. Also delete `everything` which is not useful when `clobber` doesn't exist.
Differential Revision: https://phabricator.services.mozilla.com/D93514
This patch does:
- Use LSWriteOptimizer
- Remove SessionStorageService since it's unused.
- Move IPC from PContent to PBackground
(by SessionStorageManager{Child, Parent} and SessionStorageCache{Child, Parent}).
- Extract SessionStorageManagerBase and add PBackgroundSessionStorageManager.
- Expose a getter function to get a BackgroundParentManager for top context id
on the parent.
IPC
- Before this patch:
- Copy from parent while loading a document.
- Mark cache entry on the parent process as loaded by the child id.
- Update change on checkpoint.
- Unmark cache entry on the parent process as unloaded for the child id while
the parent actor is being destroyed.
- After this patch:
- Sync IPC load in the first SessionStorage operation.
- Update change on checkpoint
`BackgroundSessionStorageManager`'s lifecycle on the parent process.
- Create by `SessionStorageManagerParent` and register to the `sManagers`.
- Hold by `SessionStorageManagerParent` and `sManagers`.
- Remove from the `sManagers` while the corresponding `BrowsingContext` is
destructed (on the parent process).
Depends on D89341
Differential Revision: https://phabricator.services.mozilla.com/D89342
Each existing GamepadTestChannel needs to know when gamepad monitoring is
started or stopped so it knows whether it's safe to deliver messages.
Differential Revision: https://phabricator.services.mozilla.com/D86747
There are at least 8 different methods for getting a range from an offset:
1. left word
2. right word
3. line
4. left line
5. right line
6. sentence
7. paragraph
8. range with same style.
Having a single wrapper and IPDL method for all of those with an enum would remove
a lot of redundancies.
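A hypothetical consolidation sketch; the enum and method names are invented for illustration and may not match what eventually lands:
```
enum class EWhichRange : uint32_t {
  eLeftWord,
  eRightWord,
  eLine,
  eLeftLine,
  eRightLine,
  eSentence,
  eParagraph,
  eStyleRange,
};

// One wrapper (and one IPDL message) replaces eight near-identical ones.
void RangeAt(int32_t aOffset, EWhichRange aRangeType,
             int32_t* aStartOffset, int32_t* aEndOffset);
```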
Differential Revision: https://phabricator.services.mozilla.com/D90936
We use C++14's generic lambdas and their auto&& parameters in the generated code, in combination with a typed local variable to ensure the argument type is enforced.
The object is moved as necessary; no copies will occur.
The code generated will now be:
[this, self__, id__, seqno__](auto&& aParam) {
if ((!(self__))) {
NS_WARNING("Not resolving response because actor is dead.");
return;
}
bool resolve__ = true;
InitResultIPDL result = std::forward<decltype(aParam)>(aParam);
IPC::Message* reply__ = PRemoteDecoder::Reply_Decode(id__);
WriteIPDLParam(reply__, self__, resolve__);
// Sentinel = 'resolve__'
(reply__)->WriteSentinel(322044863);
WriteIPDLParam(reply__, self__, std::move(result));
// Sentinel = 'result'
(reply__)->WriteSentinel(153223840);
(reply__)->set_seqno(seqno__);
}
For returns with multiple arguments, creation of the Tuple via Tie is also moved, though currently the Tie method doesn't support move semantics.
Differential Revision: https://phabricator.services.mozilla.com/D90090
In most situations, JSONWriter users already know string lengths (either directly, or through `nsCString` and friends), so we should keep this information through JSONWriter and not recompute it again.
This also allows using JSONWriter with sub-strings (e.g., from a bigger buffer), without having to create null-terminated strings.
Public JSONWriter functions have overloads that accept literal strings.
Differential Revision: https://phabricator.services.mozilla.com/D86192
Some Android ARM64 devices appear to have a bug where sendmsg sometimes
returns 0xFFFFFFFF, which we're assuming is a -1 that was incorrectly
truncated to 32-bit and then zero-extended. This patch detects that
value (which should never legitimately be returned, because it's 16x
the maximum message size) and replaces it with -1, with some additional
assertions.
The workaround is also enabled on x86_64 Android on debug builds only,
so that the code has CI coverage.
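A simplified sketch of the detection (the real change lives in the IPC channel code; the platform condition and surrounding error handling are simplified):
```
ssize_t bytes = sendmsg(fd, &msg, MSG_DONTWAIT);
#if defined(ANDROID)
if (static_cast<uintptr_t>(bytes) == 0xFFFFFFFFu) {
  // Never a legitimate return value (16x the maximum message size); assume it
  // is a -1 that was truncated to 32 bits and zero-extended by the kernel.
  bytes = -1;
}
#endif
```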
Differential Revision: https://phabricator.services.mozilla.com/D89845
Doing this helps lower the chances of accidentally trying to send an
uninitialized primitive value, like a raw pointer or integer, over IPC due to a
sync method or IPDLParamTraits::Read implementation failing to initialize the
outparameter.
Differential Revision: https://phabricator.services.mozilla.com/D89963
With fission enabled, when we are starting a load, we might be saving
principals for a specific browsing context in process A, and then end up
targeting process B for the load, so during deserialization of the
LoadInfoArgs struct, we will end up using principals that were saved during a
previous load targeting that browsing context (with the same id) but in
process B.
Therefore, we cannot assert (without clearing the saved principals in the
original browsing context when a load is done, which can be done as follow-up
work) that the saved principal will be equal to the serialized one from
LoadInfoArgs.
Differential Revision: https://phabricator.services.mozilla.com/D89728
This patch enables sandboxed srcdoc loads to take place via DocumentChannel,
and adds mechanisms for enabling unsandboxed ones.
Both unsandboxed srcdoc, and in subsequent patches, about:blank, loads require
that the triggering principal and the principal to inherit point to the same
instance if the load takes place in the same process as where we are inheriting
those principals from. We save those principals on a target browsing context before
we load the URI, and later, when we are deserializing LoadInfoArgs into
LoadInfo in the content process, we retrieve the saved principals if the
current load identifier of the target BC matches the load identifier saved
along with the principals.
We also need to make sure that during a process switch for about:srcdoc load,
we don't use the original URI for about:srcdoc to determine the remote type and
instead we use channel's result principal.
Differential Revision: https://phabricator.services.mozilla.com/D85079
We need a sync IPC call for this because otherwise the number of smaller sync messages we would need to call would be variable.
Differential Revision: https://phabricator.services.mozilla.com/D88076
CanSend() is called (in ChannelSend()) by SendFoo() IPDL calls to prevent calling Send() on a dead actor but shmem creation uses a different code path to Send() -- one that does not use ChannelSend. This adds the guard to shmem allocation as well.
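A sketch of the added guard; the AllocShmem signature is an assumption and the real check sits inside the actor's shmem-allocation path:
```
Shmem AllocShmemChecked(IProtocol* aActor, size_t aSize,
                        Shmem::SharedMemory::SharedMemoryType aType) {
  Shmem mem;
  if (!aActor->CanSend()) {
    // The actor is already dead: Send() calls are rejected by ChannelSend(),
    // but shmem creation takes a different path, so it needs the same guard.
    return mem;
  }
  aActor->AllocShmem(aSize, aType, &mem);
  return mem;
}
```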
Differential Revision: https://phabricator.services.mozilla.com/D87357
posix_fallocate iterates over each page/block in a shmem to ensure the
OS allocates memory to back it. Large shmems will cause many read/write
calls to be made, and when profiling, it is very likely a SIGPROF signal
will interrupt us at sufficiently high sampling rates. Most attempts at
retrying will fail for the same reason, and this can cause the threads
to block for an indeterminate period of time.
To work around this we use the profiler's "thread sleep" mechanism to
indicate that the sampler thread should not interrupt this thread with
the sampling signal more than once.
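A sketch of the scope described above (the macro comes from the Gecko profiler; the surrounding SharedMemory code is omitted and the call shape is simplified):
```
{
  // Mark this thread as "sleeping" so the sampler interrupts the long
  // posix_fallocate loop at most once instead of on every sample.
  AUTO_PROFILER_THREAD_SLEEP;
  if (posix_fallocate(fd, 0, static_cast<off_t>(aSize)) != 0) {
    // Fall back to the non-preallocated path on failure.
  }
}
```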
Differential Revision: https://phabricator.services.mozilla.com/D87373
The goal of this patch is to reduce memory usage. On at least OSX, std::Queue
can use 4kb of memory even with nothing in it. This can be around 52kb of
memory per content process.
The segment size of 64 is fairly arbitrary, but these queues didn't have
more than a few hundred items in them, so it seemed like a reasonable
trade off between space for mostly-empty queues and segment overhead.
Differential Revision: https://phabricator.services.mozilla.com/D87325
This changes the duplicate checking/caching implemented by |parsed| to be
based on the name of the protocol etc. we're loading, not the file name.
This lets us detect when a protocol is being defined in two different
files.
This can happen if one file is included earlier in the resolve path than
a file explicitly specified on the command line. Any includes will resolve
to the former, and then we'll attempt to parse the latter. Before this
patch, this would result in weird errors, because there would be multiple
protocol types with the same name.
Differential Revision: https://phabricator.services.mozilla.com/D86113
This moves the IPC mechanism from PCompositorBridge to PLayerTransaction/
PWebRenderBridge, so that it can be used by content processes like the other
test APIs. It still only produces actual data for the layers backend; for
WR it will just return empty datasets.
Differential Revision: https://phabricator.services.mozilla.com/D86016