Out-of-process WebGL needs GfxInfo to exist in the composition
process (which is the GPU process if it exists and the parent process
otherwise). This patch enables the Linux version of that component in
the GPU process; the IPC currently used to give content processes copies
of the parent's GPU info is extended to also send it to the GPU process.
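A minimal sketch of the shape of that change, with placeholder types and
message names rather than the real Gecko IPC interfaces:

    #include <string>
    #include <vector>

    struct GfxInfoData {        // stand-in for the serialized GPU info
      std::string vendorId;
      std::string deviceId;
      std::string driverVersion;
    };

    struct ProcessEndpoint {    // stand-in for an IPC child actor
      void SendUpdateGfxInfo(const GfxInfoData&) { /* IPC send */ }
    };

    // The parent already hands a copy of its GPU info to every content
    // process; the new part is also handing one to the GPU process so its
    // GfxInfo component can answer queries for out-of-process WebGL.
    void BroadcastGfxInfo(const GfxInfoData& aInfo,
                          std::vector<ProcessEndpoint*>& aContentProcesses,
                          ProcessEndpoint* aGpuProcess) {
      for (ProcessEndpoint* content : aContentProcesses) {
        content->SendUpdateGfxInfo(aInfo);      // existing path
      }
      if (aGpuProcess) {
        aGpuProcess->SendUpdateGfxInfo(aInfo);  // new: GPU process gets a copy too
      }
    }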
Differential Revision: https://phabricator.services.mozilla.com/D85443
This patch uses IPDL's return feature to ensure that the memory
reporter manager won't wait for a report from a child process
that has already exited.
This fixes a memory reporter hang that can happen if a child process
exits during a memory report, when the parent half of the actor is
being held alive. (If the parent half of the actor is not being held
alive, then mMemoryReportRequest will be naturally cleared when it
goes away.)
This was happening frequently on Windows Fission AWSY because that test
does a minimize memory right before it attempts to get a memory report,
and the preallocated content process exits when it sees a message to
minimize memory.
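A sketch of the pattern, not the real MemoryReporterManager code (the names
below are placeholders): declaring the IPDL message with a return value hands
the parent a promise whose reject callback fires if the child dies, so the
pending request is always cleared.

    #include <cstdio>
    #include <functional>

    using ResolveFn = std::function<void()>;
    using RejectFn = std::function<void()>;

    struct PendingReports {
      int outstanding = 0;
      void OneDone() {
        if (--outstanding == 0) {
          std::puts("memory report finished");
        }
      }
    };

    // aSendGetReport stands in for actor->SendGetMemoryReport(...)->Then(...).
    void RequestChildReport(
        PendingReports& aPending,
        const std::function<void(ResolveFn, RejectFn)>& aSendGetReport) {
      ++aPending.outstanding;
      aSendGetReport([&aPending] { aPending.OneDone(); },   // child replied with its report
                     [&aPending] { aPending.OneDone(); });  // child exited: stop waiting
    }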
Differential Revision: https://phabricator.services.mozilla.com/D85499
* Use clearer pref names.
* Default (and only support) IPDL dispatching.
* Make DispatchCommands async-only.
* Sync ipdl command per sync webgl entrypoint.
* Eat the boilerplate cost, since there aren't too many.
* Run SerializedSize off same path as Serialize.
* All shmem uploads go through normal DispatchCommands.
* Defer pruning of dead code for now so we can iterate quickly.
* Use Read/Write(begin,end) instead of (begin,size). (See the sketch below.)
  * This would have prevented a bug where we read/wrote N*sizeof(T)*sizeof(T).
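A sketch of that last point, illustrating why pointer pairs are safer here
than an element count:

    #include <cstddef>
    #include <vector>

    // With (begin, end) the conversion from elements to bytes happens in
    // exactly one place.
    template <typename T>
    void WriteRange(std::vector<std::byte>& aOut, const T* aBegin, const T* aEnd) {
      const size_t bytes = (aEnd - aBegin) * sizeof(T);
      const auto* src = reinterpret_cast<const std::byte*>(aBegin);
      aOut.insert(aOut.end(), src, src + bytes);
    }

    // With (begin, size) it is easy to scale by sizeof(T) twice, once at the
    // call site and once inside the helper, which is exactly the
    // N*sizeof(T)*sizeof(T) bug mentioned above:
    //   WriteBytes(out, data, count * sizeof(T));  // caller scales...
    //   // ...and a helper that also multiplies by sizeof(T) scales again.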
Differential Revision: https://phabricator.services.mozilla.com/D81495
This function works on all GeckoProcessTypes, not just those for child
processes.
Differential Revision: https://phabricator.services.mozilla.com/D54375
--HG--
extra : moz-landing-system : lando
In short - if a user forcibly terminates the browser because it seems
to be permanently hung, we currently do not get a chance to record the
hang. This is unfortunate, because these likely represent the most
egregious hangs in terms of user frustration. This patch seeks to
address that.
If a hang exceeds 8192ms (the current definition of a "permahang" in
existing BHR terms), then we decide to immediately persist it to disk,
in case we never get a chance to return to the main thread and
submit it. On the next start of the browser, we read the file from
disk on a background thread, and just submit it using the normal
mechanism.
Regarding the handling of the file itself, I tried to do the simplest
thing I could - as far as I can tell there is no standard simple
serialization mechanism available directly to C++ in Gecko, so I just
serialized it by hand. I didn't take any special care with endianness
or anything as I can't think of a situation in which we really care
at all about these files being transferable between architectures. I
directly used PR_Write / PR_Read instead of doing something fancy
like memory mapping the file, because I don't think performance is a
critical concern here and it offers a simple protection against
reading out of bounds.
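A rough sketch of that approach using the NSPR calls mentioned above; the
struct, layout, and path handling are illustrative, not the actual BHR
on-disk format:

    #include <prio.h>   // NSPR: PR_Open, PR_Read, PR_Write, PR_Close
    #include <cstdint>
    #include <string>

    struct PersistedHang {   // made-up stand-in for the persisted hang record
      uint64_t durationMs = 0;
      std::string stack;
    };

    static bool WriteHang(const char* aPath, const PersistedHang& aHang) {
      PRFileDesc* fd = PR_Open(aPath, PR_WRONLY | PR_CREATE_FILE | PR_TRUNCATE, 0600);
      if (!fd) {
        return false;
      }
      uint32_t len = static_cast<uint32_t>(aHang.stack.size());
      bool ok = PR_Write(fd, &aHang.durationMs, sizeof(aHang.durationMs)) > 0 &&
                PR_Write(fd, &len, sizeof(len)) > 0 &&
                PR_Write(fd, aHang.stack.data(), len) >= 0;
      PR_Close(fd);
      return ok;
    }

    static bool ReadHang(const char* aPath, PersistedHang& aOut) {
      PRFileDesc* fd = PR_Open(aPath, PR_RDONLY, 0);
      if (!fd) {
        return false;
      }
      uint32_t len = 0;
      // Checking every return value is the "simple protection against reading
      // out of bounds" that a hand-rolled format needs.
      bool ok = PR_Read(fd, &aOut.durationMs, sizeof(aOut.durationMs)) ==
                    int32_t(sizeof(aOut.durationMs)) &&
                PR_Read(fd, &len, sizeof(len)) == int32_t(sizeof(len));
      if (ok) {
        aOut.stack.resize(len);
        ok = PR_Read(fd, aOut.stack.data(), len) == int32_t(len);
      }
      PR_Close(fd);
      return ok;
    }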
Differential Revision: https://phabricator.services.mozilla.com/D52566
--HG--
extra : moz-landing-system : lando
This changes the way we handle crash reports for child processes that crash
too early during the child process' startup. Before bug 1547698 we wrote a
partial .extra file for those crashes that lacked the process type. The user
would not be notified of those crashes until she restarted Firefox, and even
when submitted those crashes would be erroneously labeled as browser crashes.
After bug 1547698 we stopped writing .extra files entirely for those crashes,
which left orphaned .dmp files among the pending crash reports.
This patch does three things to improve the situation:
* It writes a partial .extra file so that the crashes are detected at the next
startup. The user is still not notified directly of these crashes, but she
can report them later.
* It adds the process type to the .extra file so that the crash reports are
labelled correctly (see the sketch after this list).
* It fixes a leak in the `pidToMinidump` hash-map. Since the crashes were
never finalized, the `ChildProcessData` structure associated with them would
never be freed.
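Illustrative only, not the real crash reporter helpers: the essence of the
partial .extra file is a tiny annotation blob next to the orphaned .dmp that
records, among other things, which process type crashed (the file naming and
key below are examples).

    #include <fstream>
    #include <string>

    // Write a minimal JSON .extra next to the corresponding <uuid>.dmp so the
    // crash is picked up (with the right process type) at the next startup.
    void WritePartialExtraFile(const std::string& aDumpPathWithoutExt,
                               const std::string& aProcessType) {
      std::ofstream extra(aDumpPathWithoutExt + ".extra");
      extra << "{\"ProcessType\":\"" << aProcessType << "\"}\n";
    }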
Differential Revision: https://phabricator.services.mozilla.com/D40810
--HG--
extra : moz-landing-system : lando
This requires replacing inclusions of it with inclusions of more specific prefs
files.
The exception is StaticPrefsAll.h, which is equivalent to StaticPrefs.h and is
used in `Codegen.py`, because doing something smarter is tricky and suitable
for a follow-up. As a result, any change to StaticPrefList.yaml will still
trigger recompilation of all the generated DOM bindings files, but that's
still a big improvement over triggering recompilation of every file that uses
static prefs.
Most of the changes in this commit are very boring. The only changes that are
not boring are modules/libpref/*, Codegen.py, and ServoBindings.toml.
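An illustrative caller-side change (the group header and getter name below are
examples, not taken from the patch):

    // Before: #include "mozilla/StaticPrefs.h" pulled in every static pref,
    // so any change to StaticPrefList.yaml recompiled this file.
    #include "mozilla/StaticPrefs_dom.h"  // After: only the pref group we use.

    bool UseNewBehavior() {
      return mozilla::StaticPrefs::dom_some_feature_enabled();
    }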
Differential Revision: https://phabricator.services.mozilla.com/D39138
--HG--
extra : moz-landing-system : lando
Currently it's completely unclear at use sites that the getters for `once`
static prefs return the pref value from startup, rather than the current pref
value. (Bugs have been caused by this.) This commit improves things by changing
the getter name to make it clear that the pref value obtained is from startup.
This required changing things within libpref so it distinguishes between the
"base id" (`foo_bar`) and the "full id" (`foo_bar` or
`foo_bar_DoNotUseDirectly` or `foo_bar_AtStartup` or
`foo_bar_AtStartup_DoNotUseDirectly`; the name used depends on the `mirror` and
`do_not_use_directly` values in the YAML definition.) The "full id" is used in
most places, while the "base id" is used for the `GetPrefName_*` and
`GetPrefDefault_*` functions.
(This is a nice demonstration of the benefits of the YAML file, BTW. Making
this change with the old code would have involved adding an entry to every
single pref in StaticPrefList.h.)
The patch also rejigs the comment at the top of StaticPrefList.yaml, to clarify
some things.
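An illustrative call site (the pref name is made up; real names come from
StaticPrefList.yaml):

    #include "mozilla/StaticPrefs_gfx.h"

    void UseStartupValue() {
      // For a `once`-mirrored pref the getter name now says what you get: the
      // value captured at startup, not the pref's current value.
      const bool enabled = mozilla::StaticPrefs::gfx_example_feature_AtStartup();
      (void)enabled;
    }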
Differential Revision: https://phabricator.services.mozilla.com/D38604
--HG--
extra : moz-landing-system : lando
The patch also removes the dom.vr.oculus.quit.timeout pref, because it's
unused.
Differential Revision: https://phabricator.services.mozilla.com/D35973
--HG--
extra : rebase_source : bd16ed5ff0b7c2b4f8e653e9835610b25b14a39f
This was done automatically replacing:
s/mozilla::Move/std::move/
s/ Move(/ std::move(/
s/(Move(/(std::move(/
Removing the 'using mozilla::Move;' lines.
And then with a few manual fixups; see the bug for the split series.
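A before/after example of the rewrite (Widget is a placeholder type):

    #include <utility>  // std::move replaces mozilla::Move from "mozilla/Move.h"

    struct Widget {};

    void StoreWidget(Widget&& aWidget) {
      // Before: mWidget = mozilla::Move(aWidget);
      Widget mWidget = std::move(aWidget);
      (void)mWidget;
    }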
MozReview-Commit-ID: Jxze3adipUh
The new struct is in LayersTypes.h; all the rest of the changes just replace
existing uint64_t instances with the new LayersId struct.
Note that there is one functional change, in
CompositorBridgeParent::DeallocPWebRenderBridgeParent, where we now
correctly convert the PipelineId to a LayersId before using it to index
into sIndirectLayerTrees, whereas before we were incorrectly just using
the mHandle part of the PipelineId.
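The rough shape of such a typed wrapper (simplified; LayersTypes.h has the
real definition):

    #include <cstdint>

    struct LayersId {
      uint64_t mId = 0;

      bool IsValid() const { return mId != 0; }
      bool operator==(const LayersId& aOther) const { return mId == aOther.mId; }
      bool operator<(const LayersId& aOther) const { return mId < aOther.mId; }
    };

    // A raw uint64_t (for example the mHandle half of a PipelineId) no longer
    // converts implicitly, so indexing sIndirectLayerTrees with the wrong
    // 64-bit value becomes a compile error instead of a silent bug.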
MozReview-Commit-ID: GFHZSZiwMrP
--HG--
extra : rebase_source : d2b274f63aaee2ee9bba030297e0a37a19af0d6c
This just adds the boilerplate that goes with the new protocol, without
adding any of the actual messages. The protocol is managed by PGPU, and
there will be one instance per compositor. The parent side lives on the
main thread of the GPU process, and the child side lives on the main
thread of the UI process. The protocol is only instantiated if the GPU
process is active.
MozReview-Commit-ID: J4VzwmEfYTa
--HG--
extra : rebase_source : 397ddda8b0e76e5ed5f63783b1220ed7b4414d99
This patch was generated automatically by the "modeline.py" script, available
here: https://github.com/amccreight/moz-source-tools/blob/master/modeline.py
For every file that is modified in this patch, the changes are as follows:
(1) The patch changes the file to use the exact C++ mode lines from the
Mozilla coding style guide, available here:
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style#Mode_Line
(2) The patch deletes any blank lines between the mode line & the MPL
boilerplate comment.
(3) If the file previously had the mode lines and MPL boilerplate in a
single contiguous C++ comment, then the patch splits them into
separate C++ comments, to match the boilerplate in the coding style.
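For illustration, the target form of the top of each file is along these lines
(the exact mode lines are the ones given in the coding style guide):

    /* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
    /* vim: set ts=8 sts=2 et sw=2 tw=80: */
    /* This Source Code Form is subject to the terms of the Mozilla Public
     * License, v. 2.0. If a copy of the MPL was not distributed with this
     * file, You can obtain one at http://mozilla.org/MPL/2.0/. */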
MozReview-Commit-ID: 77D61xpSmIl
--HG--
extra : rebase_source : c6162fa3cf539a07177a19838324bf368faa162b
BHRTelemetryService only runs in the parent process (and we can only submit
pings from there), so we need to send the data which we collect in the GPU and
Content processes over IPC to the parent process.
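A hypothetical sketch of that data flow (the actor and message names are
placeholders, not the real PContent/PGPU messages):

    #include <cstdint>
    #include <string>
    #include <vector>

    struct HangDetails {       // stand-in for the collected hang data
      uint64_t durationMs = 0;
      std::vector<std::string> stack;
    };

    struct ParentEndpoint {    // stand-in for the child-to-parent actor
      void SendBHRHang(const HangDetails&) { /* IPC send */ }
    };

    // In a GPU or content process: forward the hang instead of submitting it
    // locally, because only the parent can hand it to BHRTelemetryService.
    void ReportHang(ParentEndpoint& aParent, const HangDetails& aHang) {
      aParent.SendBHRHang(aHang);
    }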
MozReview-Commit-ID: 8B5uZKbjNbU