The current set of cases where an nsIInputStream is directly serialized over IPC
is fairly limited, and includes cases where the IPDL struct may be directly
stored within a content process, leaving the nsIInputStream unused.
By switching this serialization to DelayedStart, we can avoid performing
unnecessary streaming copies of IPC data when sending postdata streams related
to session history and document navigation, while also preventing issues like
this from coming up again, since delayed start becomes the default.
Differential Revision: https://phabricator.services.mozilla.com/D101804
When aDelayedStart is specified and the created IPCStream contains an internal
IPC stream actor, that actor is used under the hood to send raw data between
processes when needed. This is usually done to reduce overhead in cases where
the target process may not use the nsIInputStream immediately, if at all.
By switching to using RemoteLazyInputStream when sending DelayedStart actors
from parent to content, the amount of data sent in the initial payload is
reduced even further, and the stream is optimized to allow sending it back to
the parent process without streaming the data through the content process again.
Normal delayed start streams are still used when sending payloads from other
processes, as RemoteLazyInputStream is only supported for nsIInputStreams
originating in the parent process.
Differential Revision: https://phabricator.services.mozilla.com/D101803
Previously, we would apply the InputStreamLengthWrapper before
DelayedStartInputStream, which meant that a stream serialized with aDelayedStart
would not correctly implement nsIInputStreamLength. By inverting the order of
these wrappers, the nsIInputStreamLength implementation is visible, without
impacting the functionality of the DelayedStartInputStream wrapper.
Differential Revision: https://phabricator.services.mozilla.com/D101802
Without the other patches in this series, this test fails both with and without
Fission enabled, for two different reasons.
With Fission disabled, the second reload request appears empty, because we are
unable to rewind the postData nsIInputStream. With Fission enabled, the second
reload request crashes because nsMIMEInputStream's invariant of requiring a
seekable stream is violated, causing the nsICloneableInputStream implementation
to misbehave.
Differential Revision: https://phabricator.services.mozilla.com/D101800
Previously, validating an invalid enum value in EnumSerializer invoked undefined
behaviour (which was hit during fuzzing, for example), because the invalid
integral value was cast to the enum type for the comparison.
This patch changes the comparison to instead cast the valid values to their
integral type and compare those.
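As a rough illustration of the safer comparison, here is a minimal sketch; the validator name and bounds are hypothetical and this is not the actual EnumSerializer code:
```cpp
#include <type_traits>

// Hypothetical contiguous-range validator: keep the untrusted value in the
// underlying integral type and cast the known-good bounds instead, so no
// out-of-range value is ever converted to the enum type (which would be UB).
template <typename E, E MinLegal, E MaxLegal>
struct ExampleEnumValidator {
  using U = std::underlying_type_t<E>;
  static bool IsLegalValue(U aRawValue) {
    return aRawValue >= static_cast<U>(MinLegal) &&
           aRawValue <= static_cast<U>(MaxLegal);
  }
};
```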
Differential Revision: https://phabricator.services.mozilla.com/D103449
Applying a bulk filter on accessibles in the content process allows us to avoid a potentially large (and variable) number of sync IPC calls to retrieve the accessible names. I chose to implement this as a "post filter" and not to actually do the entire search in content because it would cause a lot of code duplication for non-IPC searching, and we wouldn't have the flexibility to combine a text search with any arbitrary search key as the API requires.
I also generalized the RangeTypes.h header to PlatformExtTypes so it can be used to define filter types as well.
Differential Revision: https://phabricator.services.mozilla.com/D100730
Bug 1583109 introduced new function templates StringJoin and StringJoinAppend.
These are now used to replace several custom loops across the codebase that
implement string-joining algorithms to simplify the code.
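For context, this is the general shape of the hand-rolled joining loop such helpers factor out; this is a generic sketch only, not the Mozilla StringJoin/StringJoinAppend API:
```cpp
#include <string>
#include <vector>

// Typical custom string-joining loop that a join helper replaces.
std::string JoinWithSeparator(const std::vector<std::string>& aItems,
                              const std::string& aSeparator) {
  std::string result;
  for (size_t i = 0; i < aItems.size(); ++i) {
    if (i > 0) {
      result += aSeparator;  // separator only between items, not before the first
    }
    result += aItems[i];
  }
  return result;
}
```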
Differential Revision: https://phabricator.services.mozilla.com/D98750
Passing a union here allows us to reuse code and trim some code which was
duplicated to handle the different NodeId formats. This also consolidates the
former `NodeId` and `NodeIdData` structures into a new structure (working name
`NodeIdParts`) which represents parts that will later be converted to a string
based NodeId.
Differential Revision: https://phabricator.services.mozilla.com/D93569
This moves parts of IPCMessageUtils.h to two new header files and adapts
the include directives as necessary. The new header files are:
- EnumSerializer.h, which defines the templates for enum serializers
- IPCMessageUtilsSpecializations.h, which defines template specializations
of ParamTraits with extra dependencies (building upon both IPCMessageUtils.h
and EnumSerializer.h)
This should minimize the dependencies pulled in by every consumer of
IPCMessageUtils.h.
Differential Revision: https://phabricator.services.mozilla.com/D94459
### Story
When the existing top browsing context is replaced, SessionStorage relies on
being propagated by SessionStore. However, this has been disabled with
session history in the parent process.
Therefore, we need to propagate SessionStorage data by propagating
BackgroundSessionStorageManager.
### Notes
This patch assumes that the target top-level browsing context shouldn't have
been registered to BackgroundSessionStorageManager. If this can happen, we will
either need to merge SessionStorage data into it or find a proper (earlier)
place to update sManagers.
### Test Plan
Test: D97763
Try: https://treeherder.mozilla.org/jobs?repo=try&revision=2acc7b393fb80b640f4fbe3ade1da7dd440c380e&selectedTaskRun=KK6XhR-sQuqv5lcntVLc2w.0
Differential Revision: https://phabricator.services.mozilla.com/D98082
Classes that inherit from DiscardableRunnable are only promising that it is OK
for Run() to be skipped, rather than promising that Cancel() is effective.
Differential Revision: https://phabricator.services.mozilla.com/D98117
To allow `requestAnimationFrame()` and similar things to run at monitor
speed if there is only a window-specific vsyncsource available.
This is the case for Wayland and, in the future, EGL/X11. Other backends
may opt for window-specific vsyncsources as well at some point.
The idea is to, instead of using global vsync objects, expose a vsyncsource
from nsWindow and use it for refresh drivers. For the content process, move
VsyncChild to BrowserChild, so for each BrowserChild there is only one
VsyncChild to which all refresh drivers connect.
IPC is managed either by PBrowser or PBackground. Right now, PBrowser is
only used on Wayland, as both PBrowser and the Wayland vsyncsource run
on the main thread. Other backends keep using the background thread for
now.
While at it, make it so that we constantly update the refresh rate. This
is necessary for Wayland, but also on other platforms variable refresh rates
are increasingly common. Do that by transmitting the vsync rate via `SendNotify()`.
How to test:
- run the Wayland backend
- enable `widget.wayland_vsync.enabled`
- optionally: disable `privacy.reduceTimerPrecision`
- run `vsynctester.com` or `testufo.com`
Expected results:
Instead of fixed 60Hz, things should update at monitor refresh rate -
e.g. 144Hz
Original patch by Kenny Levinsen.
Depends on D98254
Differential Revision: https://phabricator.services.mozilla.com/D93173
When using an x64 GMP child process with an arm64 parent process on arm64 Macs, use a 16k Shmem page size in the child process.
Differential Revision: https://phabricator.services.mozilla.com/D98241
To allow `requestAnimationFrame()` and similar things to run at monitor
speed if there is only a window-specific vsyncsource available.
This is the case for Wayland and, in the future, EGL/X11. Other backends
may opt for window-specific vsyncsources as well at some point.
The idea is to, instead of using global vsync objects, expose a vsyncsource
from nsWindow and use it for refresh drivers. For the content process, move
VsyncChild to BrowserChild, so for each BrowserChild there is only one
VsyncChild to which all refresh drivers connect.
IPC is managed either by PBrowser or PBackground. Right now, PBrowser is
only used on Wayland, as both PBrowser and the Wayland vsyncsource run
on the main thread. Other backends keep using the background thread for
now.
While at it, make it so that we constantly update the refresh rate. This
is necessary for Wayland, but also on other platforms variable refresh rates
are increasingly common. When using PVsync, limit updates to at most once every
250ms in order to minimize overhead while still updating fast.
How to test:
- run the Wayland backend
- enable `widget.wayland_vsync.enabled`
- optionally: disable `privacy.reduceTimerPrecision`
- run `vsynctester.com` or `testufo.com`
Expected results:
Instead of fixed 60Hz, things should update at monitor refresh rate -
e.g. 144Hz
Original patch by Kenny Levinsen.
Differential Revision: https://phabricator.services.mozilla.com/D93173
TimeStamps in markers must now be streamed through `SpliceableJSONWriter::TimeProperty(name, timestamp)`.
This is consistent with all other JSON-writing functions being in `SpliceableJSONWriter` (and base class `JSONWriter`).
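A minimal sketch of what a marker-streaming call site looks like after this change; the function wrapper, writer reference, and timestamp argument are illustrative and assume the Gecko profiler headers, not a real call site:
```cpp
// Stream the timestamp directly through the writer; it handles conversion to
// the profile's JSON time representation.
void StreamMarkerStartTime(SpliceableJSONWriter& aWriter,
                           const mozilla::TimeStamp& aStart) {
  aWriter.TimeProperty("startTime", aStart);
}
```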
Depends on D97556
Differential Revision: https://phabricator.services.mozilla.com/D97557
Check `profiler_is_locked_on_current_thread()` before recording an IPC marker.
This removes the deadlock found in bug 1675406:
- SamplerThread: During sampling, some data is recorded into the profile buffer, which locks ProfileChunkedBuffer::mMutex, this triggers some chunk updates that are sent to ProfileBufferGlobalController, which attempts to lock its mutex.
- Main thread: An IPC with an update arrives, ProfileBufferGlobalController locks its mutex, then it sends an IPC out, this records a marker into ProfileChunkedBuffer, which attempts to lock its mMutex.
With this patch and bug 1671403, that last IPC will not record a marker anymore.
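Conceptually the guard looks like the following sketch; only `profiler_is_locked_on_current_thread()` is named by this patch, and the wrapper function and marker-recording body are illustrative:
```cpp
void MaybeRecordIPCMarker() {
  // Only record the IPC marker if this thread does not currently hold a
  // profiler-related lock; otherwise recording could re-enter the profiler
  // buffer and recreate the lock cycle described above.
  if (profiler_is_locked_on_current_thread()) {
    return;
  }
  // ... record the IPC marker into the profile buffer here ...
}
```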
Differential Revision: https://phabricator.services.mozilla.com/D96971
* We add a new function to `mscom/Utils.h`: `DiagnosticNameForIID`. Its
purpose is to generate a friendly name for an interface, given an IID.
For special interfaces internal to COM, we add our own descriptive strings.
If the interface does not have a description, we simply convert the IID
to string format using GUIDToString.
* We modify `ProfilerMarkerChannelHook` to include the additional diagnostic
information for IIDs in its markers.
* Since each marker is now differentiated by IID, we remove the restriction
that we only use markers for the outermost COM call. In particular, this
assumption doesn't hold for asynchronous COM calls, so we would be losing
data in the case where an async call was pending while the main thread was
still making COM calls on other interfaces.
* There isn't really an effective way to distinguish between sync and
async calls at the channel hook layer. I'm thinking about how we could
perhaps modify `AsyncInvoker` to help mark these, but it's a bit messy.
I'm going to postpone that to future work.
* Other potential future work is expanding the number of interfaces for which
we have friendly names. I could see us annotating our various COM interfaces
in a way that we could automagically generate human-readable descriptions for
those interfaces.
Differential Revision: https://phabricator.services.mozilla.com/D97042
I realized that calling `mscom::IsProxy` is kind of redundant when we already
need to query for `ICallFactory`.
We still allow `aIsProxy` as an optional constructor argument, but if not
present then we go straight to `QueryInterface(IID_ICallFactory)`.
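A sketch of the simplified check, under the assumption that the object is held in an IUnknown pointer named aUnknown; the surrounding code is illustrative:
```cpp
// Skip a separate mscom::IsProxy() check: only proxies implement
// ICallFactory, so a successful QI answers both questions at once.
RefPtr<ICallFactory> callFactory;
const bool isProxy =
    SUCCEEDED(aUnknown->QueryInterface(IID_ICallFactory,
                                       getter_AddRefs(callFactory)));
```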
Differential Revision: https://phabricator.services.mozilla.com/D97047
We can now chain creation of the decoder to launching the RDD process as needed and setting up the PRemoteDecoderManager.
Differential Revision: https://phabricator.services.mozilla.com/D96365
PDMFactory::CreateDecoder is changed to return a MozPromise that will contain the MediaDataDecoder once created.
This will later allow making RemoteDecoderManager fully asynchronous, no longer requiring a sync IPC call to start the RDD process.
We also modify the WebrtcMediaDataDecoderCodec to never create a decoder on the main thread, which could cause deadlocks under some circumstances.
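A rough sketch of consuming the promise-based API; the member names, thread target, and callbacks are assumptions for illustration, not the actual call sites:
```cpp
// Chain decoder creation instead of blocking: the MozPromise resolves with
// the MediaDataDecoder once it has been created.
mPDMFactory->CreateDecoder(aParams)
    ->Then(mTaskQueue, __func__,
           [self = RefPtr{this}](RefPtr<MediaDataDecoder> aDecoder) {
             self->OnDecoderCreated(std::move(aDecoder));
           },
           [self = RefPtr{this}](const MediaResult& aError) {
             self->OnDecoderCreationFailed(aError);
           });
```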
Differential Revision: https://phabricator.services.mozilla.com/D96364
The new infrastructure consists of a separate bridge between the content and the
parent process and a separate local storage database in the parent process.
The new infrastructure can be used for storing and sharing private browsing
data across content processes.
This patch only creates the necessary infrastructure; actual enabling of storing
and sharing of data across content processes will be done in a follow-up patch.
Differential Revision: https://phabricator.services.mozilla.com/D96562
When running as a "universal" build, use an x64 GMP child process if the CDM library is an x64 binary.
Use ifdefs extensively to reduce risk to Intel builds if the fix needs to be uplifted.
Requires a server-side balrog change to serve an Intel Widevine binary to ARM browser versions.
Differential Revision: https://phabricator.services.mozilla.com/D96288
Since python creates little-endian utf-16 consistently whether
cross-compiling from Linux or compiling natively on macOS, we could
write a small script that essentially replaces iconv. On the other hand,
we're also doing some manual preprocessing on the InfoPlist.strings.in
files, and we might as well use the preprocessor for that.
So, we augment the preprocessor to allow an explicit output encoding
other than utf-8, and use the preprocessor instead of `sed | iconv`.
Differential Revision: https://phabricator.services.mozilla.com/D96013
I need this for some changes I want to make to Win32 file pickers.
* We add an event-driven variant to `mscom::AsyncInvoker`. When the async call
invokes `ISynchronize::Signal`, we post an event to the specified event
target (or implicitly to the main thread).
* For this to work, the async call needs to temporarily retain a reference to
itself, otherwise the async call object is destroyed and the in-flight call
is cancelled. This reference is stored in an "outer runnable" which is
responsible for executing the inner completion runnable, and then dropping
the self-reference.
* We only run the completion runnable upon *successful* initiation of the async
call. If there was a failure, we return that code immediately to the caller.
Failures also clear the reference to the completion runnable.
* If we could not obtain an async interface and must run synchronously, then
we run the completion runnable immediately after a successful synchronous
invocation.
Differential Revision: https://phabricator.services.mozilla.com/D95808
We add new DLL registration code. This is a rather generic function that
permits the following:
* Registering multiple `CLSID`s for the same DLL;
* Registering an optional `AppID`. Registering an `AppID` allows us to use a
`DllSurrogate` to host the DLL out-of-process using Windows' built-in
`dllhost.exe`. I'll be using this feature in a future bug.
* Supporting all available threading models;
* Capable of registering either inproc servers or inproc handlers;
* Using the transaction-based registry API so that we can cleanly roll back
during registration if any part(s) of it fail.
Differential Revision: https://phabricator.services.mozilla.com/D95606
In bug 1595994 we attempted to streamline the ability to determine which decoders were available regardless of the process they would run in. This was subsequently done via the PDMFactory.
As there are several JS APIs that can query which codecs are supported, this requires a synchronous mechanism.
It allowed making a determination during the PlatformDecoderModule::Supports call, depending on which process it was going to be called from.
However, having a synchronous IPC call to the RemoteDecoderManagerParent has too many caveats to be workable.
So what we do instead is first determine at launch whether the required external frameworks are available and pass this information to each content process.
When checking if a decoder is available, we make a best guess at whether the PDM would support such a codec, without actually loading the framework when running in the content process.
Supports can no longer make a decision based on the currently running process, and as such PDM::CreateAudio/VideoDecoder using an optional system framework now need to further check the validity of the CreateDecoderParam argument.
Differential Revision: https://phabricator.services.mozilla.com/D95245
Some cleanup in our smart pointer stuff: Using `move` semantics lets us avoid
needing to hack around our static analysis.
Depends on D95599
Differential Revision: https://phabricator.services.mozilla.com/D95600
Allow-list all Python code in tree for use with the black linter, and re-format all code in-tree accordingly.
To produce this patch I did all of the following:
1. Make changes to tools/lint/black.yml to remove include: stanza and update list of source extensions.
2. Run ./mach lint --linter black --fix
3. Make some ad-hoc manual updates to python/mozbuild/mozbuild/test/configure/test_configure.py -- it has some hard-coded line numbers that the reformat breaks.
4. Make some ad-hoc manual updates to `testing/marionette/client/setup.py`, `testing/marionette/harness/setup.py`, and `testing/firefox-ui/harness/setup.py`, which have hard-coded regexes that break after the reformat.
5. Add a set of exclusions to black.yml. These will be deleted in a follow-up bug (1672023).
# ignore-this-changeset
Differential Revision: https://phabricator.services.mozilla.com/D94045
This commit also allows `memfd_create` in the seccomp-bpf policy for all
process types.
`memfd_create` is an API added in Linux 3.17 (and adopted by FreeBSD
for the upcoming version 13) for creating anonymous shared memory
not connected to any filesystem. Supporting it means that sandboxed
child processes on Linux can create shared memory directly instead of
messaging a broker, which is unavoidably slower, and it should avoid
the problems we'd been seeing with overly small `/dev/shm` in container
environments (which were causing serious problems for using Firefox for
automated testing of frontend projects).
`memfd_create` also introduces the related operation of file seals:
irrevocably preventing types of modifications to a file. Unfortunately,
the most useful one, `F_SEAL_WRITE`, can't be relied on; see the large
comment in `SharedMemory::ReadOnlyCopy` for details. So we still use
the applicable seals as defense in depth, but read-only copies are
implemented on Linux by using procfs (and see the comments on the
`ReadOnlyCopy` function in `shared_memory_posix.cc` for the subtleties
there).
There's also a FreeBSD implementation, using `cap_rights_limit` for
read-only copies, if the build host is new enough to have the
`memfd_create` function.
The support code for Android, which doesn't support shm_open and can't
use the memfd backend because of issues with its SELinux policy (see bug
1670277), has been reorganized to reflect that we'll always use its own
API, ashmem, in that case.
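For reference, the core memfd_create-plus-seals idea looks roughly like this standalone sketch; it is not the Gecko SharedMemory implementation, and the function name and seal set are illustrative:
```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Create an anonymous shared-memory fd directly in this process (no broker,
// no /dev/shm file), size it, then seal it so it can no longer grow or
// shrink. Requires Linux 3.17+ (or FreeBSD 13+).
static int CreateSealedShmem(size_t aSize) {
  int fd = memfd_create("example-shm", MFD_CLOEXEC | MFD_ALLOW_SEALING);
  if (fd < 0) {
    return -1;
  }
  if (ftruncate(fd, static_cast<off_t>(aSize)) != 0 ||
      fcntl(fd, F_ADD_SEALS, F_SEAL_GROW | F_SEAL_SHRINK) != 0) {
    close(fd);
    return -1;
  }
  return fd;
}
```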
Differential Revision: https://phabricator.services.mozilla.com/D90605
Allow-list all Python code in tree for use with the black linter, and re-format all code in-tree accordingly.
To produce this patch I did all of the following:
1. Make changes to tools/lint/black.yml to remove include: stanza and update list of source extensions.
2. Run ./mach lint --linter black --fix
3. Make some ad-hoc manual updates to python/mozbuild/mozbuild/test/configure/test_configure.py -- it has some hard-coded line numbers that the reformat breaks.
4. Add a set of exclusions to black.yml. These will be deleted in a follow-up bug (1672023).
# ignore-this-changeset
Differential Revision: https://phabricator.services.mozilla.com/D94045
We also remove unnecessary checks: BackgroundParent only runs in the parent process and only if e10s is on. Also, RecvLaunchRDDProcess will only ever be called if the RDD pref is already on.
By streamlining the call we also reduce the number of sync dispatches to 1.
Depends on D93317
Differential Revision: https://phabricator.services.mozilla.com/D93476
Add a synchronous Supports message to the IPDL PRemoteDecoderManager protocol so
decoder modules can query for playback support in the actual process that will
attempt to do the decoding.
Depends on D54878
Differential Revision: https://phabricator.services.mozilla.com/D54879
Aside from its use in AddProfilerMarker(), after initialization mPeerPid
is only used on the IO thread, so the write to it does not hold the monitor.
This means that the read in AddProfilerMarker() can cause a race, even
though we hold the monitor. This method is only called when we hold
the monitor and everything is set up, so I think we can just use
mListener->OtherPid() to get the PID.
Differential Revision: https://phabricator.services.mozilla.com/D93810
The `clobber` targets are superseded by `mach clobber`, so we don't need them for any reason. The `clean` target is meant to get you to a post-`configure` state, but it doesn't really work, and if it's necessary for you to be in that state for some reason you can just clobber and re-`configure`, so it doesn't seem worth it to get it working again. Instead, delete all of them. Also delete `everything` which is not useful when `clobber` doesn't exist.
Differential Revision: https://phabricator.services.mozilla.com/D93514
This patch does:
- Use LSWriteOptimizer
- Remove SessionStorageService since it's unused.
- Move IPC from PContent to PBackground
(by SessionStorageManager{Child, Parent} and SessionStorageCache{Child, Parent}).
- Extract SessionStorageManagerBase and add PBackgroundSessionStorageManager.
- Expose a getter function to get a BackgroundParentManager for top context id
on the parent.
IPC
- Before this patch:
- Copy from parent while loading a document.
- Mark cache entry on the parent process as loaded by the child id.
- Update change on checkpoint.
- Unmark cache entry on the parent process as unloaded for the child id while
the parent actor is being destroyed.
- After this patch:
- Sync IPC load in the first SessionStorage operation.
- Update change on checkpoint
`BackgroundSessionStorageManager`'s lifecycle on the parent process.
- Create by `SessionStorageManagerParent` and register to the `sManagers`.
- Hold by `SessionStorageManagerParent` and `sManagers`.
- Remove from the `sManagers` while the corresponding `BrowsingContext` is
destructed (on the parent process).
Depends on D89341
Differential Revision: https://phabricator.services.mozilla.com/D89342
Each existing GamepadTestChannel needs to know when gamepad monitoring is
started or stopped so it knows whether it's safe to deliver messages.
Differential Revision: https://phabricator.services.mozilla.com/D86747
There are at least 8 different methods for getting a range from an offset:
1. left word
2. right word
3. line
4. left line
5. right line
6. sentence
7. paragraph
8. range with same style.
Having a single wrapper and IPDL method for all of those with an enum would remove
a lot of redundancies.
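A hypothetical sketch of that consolidation; the enum and method names are illustrative, not an existing interface:
```cpp
// One boundary-type enum instead of eight nearly identical wrapper/IPDL methods.
enum class EWhichRangeAtOffset {
  eLeftWord,
  eRightWord,
  eLine,
  eLeftLine,
  eRightLine,
  eSentence,
  eParagraph,
  eSameStyleRange,
};

// Single wrapper taking the enum, replacing the per-variant methods:
// bool RangeAt(int32_t aOffset, EWhichRangeAtOffset aType,
//              int32_t* aStartOffset, int32_t* aEndOffset);
```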
Differential Revision: https://phabricator.services.mozilla.com/D90936
We use C++14 generic lambdas with auto&& parameters in the generated code, in combination with a typed local variable to ensure the argument type is enforced.
The object is moved as necessary, no copies will occur.
The code generated will now be:
  [this, self__, id__, seqno__](auto&& aParam) {
    if ((!(self__))) {
      NS_WARNING("Not resolving response because actor is dead.");
      return;
    }
    bool resolve__ = true;
    InitResultIPDL result = std::forward<decltype(aParam)>(aParam);
    IPC::Message* reply__ = PRemoteDecoder::Reply_Decode(id__);
    WriteIPDLParam(reply__, self__, resolve__);
    // Sentinel = 'resolve__'
    (reply__)->WriteSentinel(322044863);
    WriteIPDLParam(reply__, self__, std::move(result));
    // Sentinel = 'result'
    (reply__)->WriteSentinel(153223840);
    (reply__)->set_seqno(seqno__);
  }
For returning multiple arguments, the Tuple created via Tie is also moved, though the Tie method doesn't currently support move semantics.
Differential Revision: https://phabricator.services.mozilla.com/D90090
In most situations, JSONWriter users already know string lengths (either directly, or through `nsCString` and friends), so we should keep this information through JSONWriter and not recompute it again.
This also allows using JSONWriter with sub-strings (e.g., from a bigger buffer), without having to create null-terminated strings.
Public JSONWriter functions have overloads that accept literal strings.
Differential Revision: https://phabricator.services.mozilla.com/D86192
Some Android ARM64 devices appear to have a bug where sendmsg sometimes
returns 0xFFFFFFFF, which we're assuming is a -1 that was incorrectly
truncated to 32-bit and then zero-extended. This patch detects that
value (which should never legitimately be returned, because it's 16x
the maximum message size) and replaces it with -1, with some additional
assertions.
The workaround is also enabled on x86_64 Android on debug builds only,
so that the code has CI coverage.
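The workaround described above amounts to something like the following sketch; this is a self-contained illustration, not the actual Chromium/Gecko channel code, and the wrapper name is hypothetical:
```cpp
#include <sys/socket.h>

// Detect the impossible 0xFFFFFFFF return value and normalize it to -1.
// On a 64-bit ssize_t this value cannot alias a real -1, and it cannot be a
// legitimate byte count because it far exceeds the maximum message size.
static ssize_t SendMsgWithWorkaround(int aFd, const struct msghdr* aMsg) {
  ssize_t rv = sendmsg(aFd, aMsg, MSG_DONTWAIT);
  if (rv == static_cast<ssize_t>(0xFFFFFFFF)) {
    // Assumed to be a -1 that was truncated to 32 bits and zero-extended.
    rv = -1;
  }
  return rv;
}
```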
Differential Revision: https://phabricator.services.mozilla.com/D89845
Doing this helps lower the chances of accidentally trying to send an
uninitialized primitive value, like a raw pointer or integer, over IPC due to a
sync method or IPDLParamTraits::Read implementation failing to initialize the
outparameter.
Differential Revision: https://phabricator.services.mozilla.com/D89963
With fission enabled, when we are starting a load, we might be saving
principals for a specific browsing context in process A, and then end up
targetting process B for the load, so during deserialization of the
LoadInfoArgs struct, we will end up using principals that were saved during a
previous load targetting that browsing context (with the same id) but in
process B.
Therefore, we cannot assert (without clearing the saved principals in the
original browsing context when a load is done, which can be done as follow-up
work) that the saved principal will be equal to the serialized one from
LoadInfoArgs.
Differential Revision: https://phabricator.services.mozilla.com/D89728
This patch enables sandboxed srcdoc loads to take place via DocumentChannel,
and adds mechanisms for enabling unsandboxed ones.
Both unsandboxed srcdoc, and in subsequent patches, about:blank, loads require
that the triggering principal and the principal to inherit point to the same
instance if the load takes place in the same process as where we are inheriting
those principals from. We save those principals on a target browsing context before
we load the URI, and later, when we are deserializing LoadInfoArgs into
LoadInfo in the content process, we retrieve the saved principals if the
current load identifier of the target BC matches the load identifier saved
along with the principals.
We also need to make sure that during a process switch for about:srcdoc load,
we don't use the original URI for about:srcdoc to determine the remote type and
instead we use channel's result principal.
Differential Revision: https://phabricator.services.mozilla.com/D85079
We need a sync IPC call for this because otherwise the number of smaller sync messages we would need to call would be variable.
Differential Revision: https://phabricator.services.mozilla.com/D88076
CanSend() is called (in ChannelSend()) by SendFoo() IPDL calls to prevent calling Send() on a dead actor but shmem creation uses a different code path to Send() -- one that does not use ChannelSend. This adds the guard to shmem allocation as well.
Differential Revision: https://phabricator.services.mozilla.com/D87357
posix_fallocate iterates over each page/block in a shmem to ensure the
OS allocates memory to back it. Large shmems will cause many read/write
calls to be made, and when profiling, it is very likely a SIGPROF signal
will interrupt us at sufficiently high sampling rates. Most attempts at
retrying will fail for the same reason, and this can cause the threads
to block for an indeterminate period of time.
To work around this we use the profiler's "thread sleep" mechanism to
indicate that the sampler thread should not interrupt this thread with
the sampling signal more than once.
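The idea reduces to roughly this sketch, assuming the Gecko profiler's AUTO_PROFILER_THREAD_SLEEP macro; the surrounding function is illustrative, not the actual SharedMemory code:
```cpp
#include <fcntl.h>

// Mark this thread as "sleeping" so the sampler interrupts the long,
// signal-sensitive posix_fallocate() walk at most once instead of
// repeatedly aborting it with SIGPROF.
static bool PreallocateShmem(int aFd, size_t aSize) {
  AUTO_PROFILER_THREAD_SLEEP;
  return posix_fallocate(aFd, 0, static_cast<off_t>(aSize)) == 0;
}
```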
Differential Revision: https://phabricator.services.mozilla.com/D87373
The goal of this patch is to reduce memory usage. On at least OSX, std::queue
can use 4kb of memory even with nothing in it. This can be around 52kb of
memory per content process.
The segment size of 64 is fairly arbitrary, but these queues didn't have
more than a few hundred items in them, so it seemed like a reasonable
trade off between space for mostly-empty queues and segment overhead.
Differential Revision: https://phabricator.services.mozilla.com/D87325
This changes the duplicate checking/caching implemented by |parsed| to be
based on the name of the protocol etc. we're loading, not the file name.
This lets us detect when a protocol is being defined in two different
files.
This can happen if one file is included earlier in the resolve path than
a file explicitly specified on the command line. Any includes will resolve
to the former, and then we'll attempt to parse the latter. Before this
patch, this would result in weird errors, because there would be multiple
protocol types with the same name.
Differential Revision: https://phabricator.services.mozilla.com/D86113
This moves the IPC mechanism from PCompositorBridge to PLayerTransaction/
PWebRenderBridge, so that it can be used by content processes like the other
test APIs. It still only produces actual data for the layers backend; for
WR it will just return empty datasets.
Differential Revision: https://phabricator.services.mozilla.com/D86016
Change the GamepadEventChannel so it is fully initialized by the IPC
constructor and needs no separate "init" message, and so it is completely
destroyed by the ActorDestroy() message and needs no "cleanup" message.
This simplifies the object lifetime, as well as unifies the IPC error vs
clean shutdown paths.
Differential Revision: https://phabricator.services.mozilla.com/D85481
The main reason for this patch is that the source stream is closed when `InputStreamHelper::SerializeInputStreamAsPipe` is called, but stays open when `InputStreamHelper::SerializeInputStreamAsPipe` is not called. I think we should make the stream serialization behavior consistent by keeping the source stream open in both cases.
Differential Revision: https://phabricator.services.mozilla.com/D81978
CLOSED TREE
We don't need these macros anymore, for two reasons:
1. We have static analysis to provide the same sort of checks via `MOZ_RAII`
and friends.
2. clang now warns for the "temporary that should have been a declaration" case.
The extra requirements on class construction also show up during debug tests
as performance problems.
This change was automated by using the following sed script:
```
# Remove declarations in classes.
/MOZ_DECL_USE_GUARD_OBJECT_NOTIFIER/d
/MOZ_GUARD_OBJECT_NOTIFIER_INIT/d
# Remove individual macros, carefully.
{
# We don't have to worry about substrings here because the closing
# parenthesis "anchors" the match.
s/MOZ_GUARD_OBJECT_NOTIFIER_PARAM)/)/g;
s/MOZ_GUARD_OBJECT_NOTIFIER_PARAM_TO_PARENT)/)/g;
s/MOZ_GUARD_OBJECT_NOTIFIER_PARAM_IN_IMPL)/)/g;
s/MOZ_GUARD_OBJECT_NOTIFIER_ONLY_PARAM_IN_IMPL)/)/g;
# Remove the longer identifier first.
s/MOZ_GUARD_OBJECT_NOTIFIER_ONLY_PARAM_TO_PARENT//g;
s/MOZ_GUARD_OBJECT_NOTIFIER_ONLY_PARAM//g;
}
# Remove the actual include.
\@# *include "mozilla/GuardObjects.h"@d
```
and running:
```
find . -name \*.cpp -o -name \*.h | grep -v 'GuardObjects.h' |xargs sed -i -f script 2>/dev/null
mach clang-format
```
Differential Revision: https://phabricator.services.mozilla.com/D85168
Previously, we only did this when IsCallerExternalProcess() returned false.
There are three reasons for changing this:
1. There seem to be cases where IsCallerExternalProcess() returns true even when marshaling for a COM query in the MTA.
2. After bug 1627084, we pre-build a11y handler payloads on the main thread for bulk fetch calls. That will marshal interceptors. However, IsCallerExternalProcess() can't work in that case because it's not running on the thread on which the COM call is being handled.
3. If MSHLFLAGS_NOPING is used, Release calls from remote clients are never sent to the server. So, as soon as we use NOPING for our parent process, we're already going to leak references, even if we don't use NOPING for external callers. Put another way, as soon as we use NOPING for one caller, we may as well use it for all callers because COM pinging will never release the object anyway.
Differential Revision: https://phabricator.services.mozilla.com/D84778
An actual use of the `BACKGROUND_X11` thread may have briefly existed in
2009 but probably never shipped; it's been unused since then, and even
the comments mentioning it have been pruned (bug 909028, bug 624422).
The other thread types besides IO have been commented out ever since
they were first committed.
This patch also removes `x11_util.h`, which has been unused since 2017
(bug 1426284); earlier in history, it had one of those comments
mentioning the nonexistent X11 thread.
Differential Revision: https://phabricator.services.mozilla.com/D85346
It could go into its own test suite, but it 1) depends on `mozbuild` code, so the `mozbuild` suite as well as this new suite would be running on any push that touches `mozbuild` code anyway, and 2) this is code that runs during the build, so it's not out of place.
Differential Revision: https://phabricator.services.mozilla.com/D84547
Having two classes in the inheritance chain inherit from SupportsWeakPtr
now won't compile, but you can use WeakPtr<Derived> when any base class
inherits from SupportsWeakPtr.
Differential Revision: https://phabricator.services.mozilla.com/D83674
The Chromium-derived IPC code was, as far as I can tell, originally
designed for Windows and assumed that channels would be named pipes,
managed and connected to via `std::wstring` paths. The port to Unix,
however, used unnamed `socketpair()` and passed them directly from
process to process, so it has no use for these channel IDs... but it
still computes and propagates them, even though they're not used, using
deprecated wide-string APIs.
This patch introduces a typedef for an abstract channel ID, which is a
`wstring` on Windows and an empty struct on Unix, to allow removing the
string code where it's not needed without needing to completely redesign
the channel abstraction.
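Based on the description above, the abstraction boils down to something like this sketch; the typedef name and platform macro are assumptions:
```cpp
#include <string>

#if defined(OS_WIN)
// Windows channels are named pipes addressed by a wide-string path.
typedef std::wstring ChannelId;
#else
// Unix channels are pre-connected socketpair() fds; no ID is needed,
// so the type carries no data.
struct ChannelId {};
#endif
```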
Differential Revision: https://phabricator.services.mozilla.com/D72260
Chromium's fix for CVE-2011-3079 added an optional prefix parameter for
channel IDs, but we've never used it and have no plans to. (Chromium
itself doesn't appear to have used it except with the prefixes "gpu"
and "nacl", and the code has since been removed completely in favor of
Mojo.) So let's simplify things and remove it.
Differential Revision: https://phabricator.services.mozilla.com/D84276
This "create a pipe" operation has a mode where, on Unix, it doesn't
create a new transport but rather uses a hard-coded fd for the initial
IPC channel in a child process. (It was originally written for Windows
and the assumption of using named pipes and pathnames for everything.)
That seems like a footgun, so this patch checks for trying to "create"
that pipe twice. However, it doesn't check for accidentally calling it
in the parent process.
Differential Revision: https://phabricator.services.mozilla.com/D72259
The PipeMap class tries to simulate the Windows channel model (named
pipes that the client opens by a pathname) on Unix. However, it's
effectively dead code -- the map is empty except in some unit tests that
we never imported.
What we do is generate a "channel ID" with string formatting, then don't
pass it to the child or ever insert anything into the map, then the child
looks up an empty string and doesn't find it, so it uses the hard-coded
fixed fd for the initial channel.
Basically, it does nothing except maybe confuse unfamiliar readers, so
let's get rid of it.
Differential Revision: https://phabricator.services.mozilla.com/D72258