The browsing context group id needs to be a 64-bit integer. Otherwise, when
a browsing context is created by a child process and the id goes over
INT32_MAX, parsing fails and we end up creating a new BCG, which can end up
in the wrong process, etc.
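A hypothetical sketch of the failure mode (the real parsing lives in Gecko's C++, and every name here is illustrative):
```
// TypeScript sketch. Old behavior: the id was parsed as a 32-bit
// integer, so any id above INT32_MAX failed to parse.
const INT32_MAX = 0x7fffffff;

function parseGroupId32(raw: string): number | null {
  const id = Number(raw);
  return Number.isInteger(id) && id >= 0 && id <= INT32_MAX ? id : null;
}

let nextGroupId = 1;

function lookupOrCreateGroup(raw: string): number {
  // On parse failure we mint a brand-new BCG id, losing the link to
  // the existing group, which can land us in the wrong process.
  return parseGroupId32(raw) ?? nextGroupId++;
}
```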
Differential Revision: https://phabricator.services.mozilla.com/D92803
Pressing Enter while the print destination select element has its initial
focus will now start the print.
Move a couple of :focus rules to :-moz-focusring for consistency in the print UI.
Differential Revision: https://phabricator.services.mozilla.com/D91839
Since the test code will be waiting on a promise from the addGamepad message,
we can simply defer fulfilling the promise until monitoring starts, if it
hasn't already.
Since every other message relies on the index given by the fulfilled promise,
this ensures that we won't get any simulated events until monitoring starts.
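A rough sketch of the deferral (illustrative names, not the actual test harness code):
```
// TypeScript sketch: addGamepad's reply promise is held back until
// monitoring starts, and every later message needs the index it yields.
let monitoringStarted = false;
let pendingResolvers: Array<(index: number) => void> = [];
let nextIndex = 0;

function onMonitoringStarted(): void {
  monitoringStarted = true;
  for (const resolve of pendingResolvers) resolve(nextIndex++);
  pendingResolvers = [];
}

function addGamepad(): Promise<number> {
  if (monitoringStarted) return Promise.resolve(nextIndex++);
  // Holding the promise back means no simulated events can be sent
  // before monitoring is up, since callers need the index first.
  return new Promise((resolve) => pendingResolvers.push(resolve));
}
```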
Differential Revision: https://phabricator.services.mozilla.com/D86748
Each existing GamepadTestChannel needs to know when gamepad monitoring is
started or stopped so it knows whether it's safe to deliver messages.
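As a conceptual sketch only (the real GamepadTestChannel is C++, and queueing undeliverable messages is an assumption made for illustration):
```
// TypeScript sketch of a channel that delivers messages only while
// gamepad monitoring is active.
type Message = { kind: string; payload?: unknown };

class TestChannel {
  private monitoringActive = false;
  private queued: Message[] = [];

  // Invoked whenever gamepad monitoring starts or stops.
  setMonitoringActive(active: boolean): void {
    this.monitoringActive = active;
    if (active) {
      this.queued.forEach((msg) => this.deliver(msg));
      this.queued = [];
    }
  }

  send(msg: Message): void {
    if (this.monitoringActive) this.deliver(msg);
    else this.queued.push(msg); // not yet safe to deliver
  }

  private deliver(msg: Message): void {
    console.log("delivering", msg.kind);
  }
}
```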
Differential Revision: https://phabricator.services.mozilla.com/D86747
This patch is an update of Benjamin Bouvier's WIP patch that supports
Cranelift's new Wasm validator functionality, as introduced in
bytecodealliance/wasmtime#2059.
Previously, Baldrdash allowed SpiderMonkey to validate Wasm function
bodies before invoking Cranelift. The Cranelift PR now causes validation
to happen in the function-body compiler. Because the Cranelift backend
now takes on this responsibility, we no longer need to invoke the
SpiderMonkey function-body validator. This requires some significant new
glue in the Baldrdash bindings to allow Cranelift's Wasm frontend to
access module type information.
Co-authored-by: Benjamin Bouvier <benj@benj.me>
Differential Revision: https://phabricator.services.mozilla.com/D92504
This patch pulls in an updated Cranelift with a new validation strategy,
introduced by bytecodealliance/wasmtime#2059. This new design validates
the Wasm module as it parses the function bodies. A subsequent patch
will adapt Baldrdash to work with this.
Differential Revision: https://phabricator.services.mozilla.com/D92503
This was more difficult to solve than I expected. The main issue is that the
`[open]` attribute on `#urlbar` wasn't accurate when the view was "open" but
there were no results or one-offs, so it was in fact closed. This broke a few
style rules.
This bug can also be reached when the user has no engines and clears the search
string while in search mode. This includes pressing Accel+K while typing a
search string when not already in search mode.
The relationship between the UrlbarView and the one-offs is complex and depends
on a lot of listeners and async calls made in synchronous contexts. Furthermore,
most of the UrlbarView open/close code is synchronous, but checking the number
of engines (to determine if the one-offs will open) is an async operation. These
factors make it difficult for the view to discern any state about the one-offs
and plan accordingly.
First I tried adding a `[oneoffs]` attribute to .urlbarView, set asynchronously
when the one-offs are built. Then I changed CSS rules to check
```
.urlbarView:not([noresults]),
.urlbarView[oneoffs]
```
instead of just `#urlbar[open]`. This approach would've required changing
a bunch of CSS from the simple `#urlbar[open]` to the more complicated CSS
above, which was not good for code readability. Also it would keep `[open]` in
a weird useless state where it couldn't really be trusted. This would've caused
other styling problems.
I settled on adding a `.then` call around the affected UrlbarView opening. The
view opens only if we are sure that we will have either results or one-offs,
so we can once again trust the `[open]` attribute. This approach has its
drawbacks. Mainly, it's a more JavaScript-heavy solution, so it's possibly
slow. Thankfully, it's something that can be easily cached.
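A simplified sketch of the idea (names are illustrative, not the real UrlbarView API):
```
// TypeScript sketch: open the view only once we know something will
// actually be shown, so [open] can be trusted again.
let cachedEngineCount: number | null = null;

async function oneOffsWillBeShown(): Promise<boolean> {
  // The async engine lookup is the expensive part, so cache it.
  if (cachedEngineCount === null) {
    cachedEngineCount = await queryVisibleEngineCount();
  }
  return cachedEngineCount > 0;
}

function maybeOpenView(view: { hasResults: boolean; open(): void }): void {
  oneOffsWillBeShown().then((oneOffs) => {
    if (view.hasResults || oneOffs) view.open();
  });
}

async function queryVisibleEngineCount(): Promise<number> {
  return 3; // stand-in for the real Search Service query
}
```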
Differential Revision: https://phabricator.services.mozilla.com/D92526
When xorigin is enabled, the test URL is amended with extra query parameters as a way to send extra information via the test runner, which this test does not expect. When comparing the test URL with the URL of the iframe it embeds, we need to remove those extra query parameters.
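A sketch of the comparison using the standard URL API (the helper name is hypothetical):
```
// TypeScript sketch: compare two URLs while ignoring any query
// parameters the xorigin harness appended.
function sameUrlIgnoringQuery(a: string, b: string): boolean {
  const ua = new URL(a);
  const ub = new URL(b);
  ua.search = "";
  ub.search = "";
  return ua.href === ub.href;
}

// sameUrlIgnoringQuery("https://example.com/t.html?xorigin=1",
//                      "https://example.com/t.html") === true
```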
Differential Revision: https://phabricator.services.mozilla.com/D92771
Some documentation I wrote up for a Service Workers OVERVIEW.md covering
exactly this situation is excerpted below. The basic situation of this patch
is that we were trying to create the FetchEventOpProxyParent immediately,
before the managing PRemoteWorker instance existed, which isn't a thing we can
do. Because FetchEvent ops are more complicated, they require somewhat more
specialized handling, but that handling should have been unified with the
PendingOp infrastructure and was not. The one notable thing going on
actor-wise is that the request body (if present) requires special
RemoteLazyInputStream serialization, and this is something that can only be
done once we have the RemoteWorkerParent. See
https://phabricator.services.mozilla.com/D73173 and its commit message and my
"Restating" blocks for more context.
### Threads and Proxies
#### Main Thread
ServiceWorkerManager's authoritative ServiceWorker state lives on the main
thread in the parent process. This is due to a combination of legacy factors
and the fact that, currently, the nsHttpChannel AsyncOpen logic where
navigation interception occurs must run on the main thread.
#### IPDL Background Thread
The IPDL Background Thread is the thread where PBackground parent actors are
created. Because IPDL actors are explicitly tied to the thread they are created
on and PBackground is the only top-level protocol created for use by DOM
Workers, this thread is the natural home for book-keeping and authoritative
state for APIs that are accessed via PBackground-managed protocols. For
example, IndexedDB and other QuotaManager-managed storage APIs.
The Remote Worker APIs are all PBackground managed and so all messages sent via
the Remote Worker API need to be sent from the IPDL Background thread.
#### Main Thread to IPDL Background Proxies
There are two ways to get data from the main thread to the IPDL Background thread:
either via direct runnable dispatch or via IPDL IPC. We use IPDL IPC (which
has optimizations for same-process communication).
The following interfaces exist exclusively to proxy requests from the
ServiceWorkerManager on the main thread to the IPDL Background thread.
- `PRemoteWorkerController` is a proxy wrapper around the
`RemoteWorkerController` API exposed on the IPDL Background thread.
- `PFetchEventOp` is paired with `PFetchEventOpProxy` managed by the above
`PRemoteWorkerController`. `PFetchEventOp` gets the data to the IPDL
Background thread from the main thread, then `PFetchEventOpProxy` gets the
data down to the content process.
## Non-Fetch ServiceWorker events AKA ExtendableEvents
This section covers how non-fetch events are dispatched to the ServiceWorker (including the IPC).
Because ServiceWorkers are intended to be shut down and restarted on demand and
most event processing is asynchronous, there needs to be a way to track
outstanding requests and for content logic to indicate when it is done
processing requests. `ExtendableEvent`s are the mechanism for this. A method
`waitUntil(promise)` adds promises to be tracked for as long as the event is
still "active".
This straightforward lifecycle means that these events map well to our IPC
async return value mechanism. This is used by
`PRemoteWorker::ExecServiceWorkerOp`.
## Fetch events and FetchEvent.respondWith()
`FetchEvent`s have a different lifecycle and dataflow than regular
`ExtendableEvents`. They expose a `respondWith()` method that will eventually
resolve with a fetch `Response` object that potentially needs to be propagated
before the ExtendableEvent is no longer active. (The response will never be
propagated after `waitUntil()` settles because every call to `respondWith()`
implicitly calls `waitUntil()`.)
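Again as a generic snippet (not from this patch):
```
// TypeScript sketch (lib: webworker). respondWith() takes a promise
// for the Response and implicitly extends the event's lifetime, as if
// the same promise had been passed to waitUntil().
const sw = self as unknown as ServiceWorkerGlobalScope;

sw.addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(
    caches
      .match(event.request)
      .then((cached) => cached ?? fetch(event.request))
  );
});
```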
From an IPC perspective, this means that `FetchEvent` instances need their own
IPC actor rather than being able to depend on the async return value mechanism
of IPDL. That's why `PFetchEventOpProxy` exists and is managed by
`PRemoteWorker`.
Differential Revision: https://phabricator.services.mozilla.com/D92021
This changeset focuses on dependencies, but surprisingly, it has an impact on bundles.
It looks like one dependency now uses a different revision.
Differential Revision: https://phabricator.services.mozilla.com/D92736
This changeset focuses on devDependencies, but similar work can probably be done on dependencies.
Many dependencies are justified solely because of jest.
In order to run flow, we only need flow, and we need very few things for bin/bundle.
For me, the three things to keep working are:
* flow
* jest
* bin/bundle to rebuild vendors, reps, worker modules
The debugger frontend itself (src/ folder) is built without involving yarn/package.json.
Differential Revision: https://phabricator.services.mozilla.com/D92735
Various tests disable process preallocation in order to get "stable" process
counts for testing purposes. To keep this behaviour, we need to disable and
shut down the recycled e10s process whenever the preallocator is disabled.
This is accomplished by having the preallocator expose its enabled status to
ContentParent, and by having the preallocator trigger ContentParent shutdown
when it is disabled.
This also has the benefit of hooking the recycler back into memory pressure
notifications, meaning it will be shut down if not in use when a memory pressure
event occurs.
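A conceptual sketch of the wiring (the real logic is C++ in PreallocatedProcessManager and ContentParent; this shape is an assumption):
```
// TypeScript sketch: the preallocator exposes its enabled state, and
// the recycler shuts the recycled process down when it flips off.
class Preallocator {
  private listeners: Array<(enabled: boolean) => void> = [];
  private enabled = true;

  isEnabled(): boolean {
    return this.enabled;
  }

  onEnabledChanged(cb: (enabled: boolean) => void): void {
    this.listeners.push(cb);
  }

  setEnabled(enabled: boolean): void {
    this.enabled = enabled;
    this.listeners.forEach((cb) => cb(enabled));
  }
}

class Recycler {
  private recycledProcess: object | null = null;

  constructor(prealloc: Preallocator) {
    // Tests that disable preallocation for stable process counts also
    // cause the recycled process to be shut down.
    prealloc.onEnabledChanged((enabled) => {
      if (!enabled) this.shutdownRecycled();
    });
  }

  onMemoryPressure(): void {
    this.shutdownRecycled(); // shut down if not in use
  }

  private shutdownRecycled(): void {
    this.recycledProcess = null;
  }
}
```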
Differential Revision: https://phabricator.services.mozilla.com/D88628
This helper method abstracts over the common tasks performed in every normal
shutdown codepath. This will be useful for making process recycling respect
preallocation being disabled.
Differential Revision: https://phabricator.services.mozilla.com/D88627
It is uncommon to have long-lived "web" processes when Fission is enabled, so in
general we probably don't want to be performing content process recycling.
We also skip recycling web processes when e10s is disabled: the process
preallocator is disabled in that case, so recycling processes can lead to
mochitest-chrome test failures.
Differential Revision: https://phabricator.services.mozilla.com/D88626
Previously, we would use the PreallocatedProcessManager to handle recycled
"web" content processes, which would remove them from the pool and try to make
them act like "prealloc" content processes.
Unlike with true preallocated content processes, recycled "web" content
processes can end up with content loaded in them, such as due to a new pop-up
being created at an inconvenient time during process teardown. The complexity
around this semi-prealloc state caused some assertion failures in other
related code.
This new approach doesn't remove the recycled process from the process selection
pool. Instead, after process selection decides to spawn a new "web" process, a
pointer to the specific "recycled" content process is fetched and returned by
the process selection algorithm before asking the preallocator.
This new approach should have similar behaviour, but avoids some of the pitfalls
caused by removing the entry from the process selection pool after it has begun
hosting content.
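A conceptual sketch of the new spawn path (the real code is C++; names are illustrative):
```
// TypeScript sketch: the recycled process stays in the selection pool;
// it is only fetched here, after selection decided to spawn a new
// "web" process and before asking the preallocator.
type Process = { pid: number; remoteType: string };

let recycled: Process | null = null;     // set during "web" teardown
let preallocated: Process | null = null; // kept warm by the preallocator
let nextPid = 1000;

function spawnWebProcess(): Process {
  return (
    takeRecycled() ??
    takePreallocated() ??
    { pid: nextPid++, remoteType: "web" }
  );
}

function takeRecycled(): Process | null {
  const p = recycled;
  recycled = null;
  return p;
}

function takePreallocated(): Process | null {
  const p = preallocated;
  preallocated = null;
  return p;
}
```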
Differential Revision: https://phabricator.services.mozilla.com/D88625