WebGPU uses CompositableInProcessManager to push TextureHosts directly from WebGPUParent to WebRender, but that plumbing has a problem and caused Bug 1805209.
Gecko already has a similar mechanism, RemoteTextureMap, which is used for out-of-process WebGL. If WebGPU uses RemoteTextureMap instead of CompositableInProcessManager, WebGPU and out-of-process WebGL share the same mechanism.
WebGPUParent pushes a new texture to the RemoteTextureMap, and the RemoteTextureMap notifies WebRenderImageHost of the pushed texture.
Before this change, one TextureHost was used per swap chain. With this change, multiple TextureHosts are used per swap chain, with recycling.
The changes are the following:
- Use RemoteTextureMap instead of CompositableInProcessManager.
- Use RemoteTextureOwnerId instead of CompositableHandle.
- Use WebRenderCanvasData instead of WebRenderInProcessImageData.
- Add a "remote texture pushed" callback to RemoteTextureMap, which it uses to notify WebRenderImageHost of a newly pushed remote texture (see the sketch after this list).
- Remove CompositableInProcessManager.
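To make the callback idea concrete, here is a minimal, self-contained sketch of a registry with a "texture pushed" callback, loosely modeled on what this change adds. TextureRegistry, OwnerId and TextureData are hypothetical stand-ins, not the real RemoteTextureMap / RemoteTextureOwnerId / TextureHost types.

```cpp
// Hypothetical registry with a "texture pushed" callback; illustrative only.
#include <cstdint>
#include <functional>
#include <map>
#include <memory>
#include <mutex>
#include <queue>
#include <utility>

struct TextureData { /* placeholder for a compositor-side texture */ };

struct OwnerId {
  uint64_t mId = 0;
  bool operator<(const OwnerId& aOther) const { return mId < aOther.mId; }
};

class TextureRegistry {
 public:
  using PushedCallback = std::function<void(const OwnerId&)>;

  // A WebRenderImageHost-like consumer registers to hear about new textures.
  void RegisterPushedCallback(const OwnerId& aOwner, PushedCallback aCallback) {
    std::lock_guard<std::mutex> lock(mMutex);
    mCallbacks[aOwner] = std::move(aCallback);
  }

  // The producer side (a WebGPUParent-like object) pushes a finished frame.
  void PushTexture(const OwnerId& aOwner,
                   std::shared_ptr<TextureData> aTexture) {
    PushedCallback callback;
    {
      std::lock_guard<std::mutex> lock(mMutex);
      mTextures[aOwner].push(std::move(aTexture));
      auto it = mCallbacks.find(aOwner);
      if (it != mCallbacks.end()) {
        callback = it->second;
      }
    }
    // Notify outside the lock so the consumer may call back into the registry.
    if (callback) {
      callback(aOwner);
    }
  }

  // The consumer takes the oldest pushed texture; a real implementation would
  // recycle textures here to support multiple TextureHosts per swap chain.
  std::shared_ptr<TextureData> TakeTexture(const OwnerId& aOwner) {
    std::lock_guard<std::mutex> lock(mMutex);
    auto it = mTextures.find(aOwner);
    if (it == mTextures.end() || it->second.empty()) {
      return nullptr;
    }
    std::shared_ptr<TextureData> texture = std::move(it->second.front());
    it->second.pop();
    return texture;
  }

 private:
  std::mutex mMutex;
  std::map<OwnerId, std::queue<std::shared_ptr<TextureData>>> mTextures;
  std::map<OwnerId, PushedCallback> mCallbacks;
};
```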
Differential Revision: https://phabricator.services.mozilla.com/D164890
Although not needed right now (checkerboarding backgrounds get
a slice anyway due to being a different scroll root), this will
be important for the upcoming work to make backdrop filter
roots implicit. This allows WR to know when slicing up a content
slice if the prim is relevant to the backdrop root.
Differential Revision: https://phabricator.services.mozilla.com/D146145
Bug 1733732 decreased the size of the display port on Android. When you scroll to the bottom of the page, the canvas leaves the display port, which triggers the destruction of WebRenderCanvasData and WebRenderCanvasRendererAsync. RenderAndroidSurfaceTextureHost::NotifyNotUsed() is then called and the RenderAndroidSurfaceTextureHost is destroyed.
If scrolling later brings the canvas back into the display port, WebRenderCanvasData, WebRenderCanvasRendererAsync and RenderAndroidSurfaceTextureHost are re-created. But there is no rendering update at SharedSurface_SurfaceTexture, since the page does its WebGL rendering only once during page load.
This causes a problem for RenderAndroidSurfaceTextureHost. RenderAndroidSurfaceTextureHost::NotifyNotUsed() returns the SurfaceTexture's buffer to the client side. To use the SurfaceTexture again in RenderAndroidSurfaceTextureHost, the client side needs to re-render to the SurfaceTexture, but SharedSurface_SurfaceTexture did nothing in this case.
To address the problem, we can hold the layers::CanvasRenderer in ClientWebGLContext::mNotLost. If WebRenderCanvasRendererAsync is kept alive, RenderAndroidSurfaceTextureHost::NotifyNotUsed() and the destruction of WebRenderCanvasRendererAsync do not happen.
Then, if WebRenderCanvasData is re-created, the stored WebRenderCanvasRendererAsync is set on the new WebRenderCanvasData in ClientWebGLContext::UpdateWebRenderCanvasData().
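A minimal sketch of the fix's shape, with hypothetical names (ClientContextLike, CanvasRendererLike, CanvasDataLike) standing in for ClientWebGLContext, WebRenderCanvasRendererAsync and WebRenderCanvasData: the client object owns the renderer, so re-creating the canvas data just reattaches it instead of tearing down the compositor-side texture.

```cpp
// Sketch only; not the real Gecko classes.
#include <memory>

struct CanvasRendererLike {
  // Stands in for the compositor-side SurfaceTexture state that must not be
  // handed back to the client while the page will not re-render.
  bool mHasCompositorTexture = true;
};

// Destroyed and re-created as the canvas leaves and re-enters the display
// port, so nothing long-lived can be owned here.
struct CanvasDataLike {
  std::shared_ptr<CanvasRendererLike> mRenderer;
};

class ClientContextLike {
 public:
  // Called whenever layout (re)creates the canvas data. Reattaching the
  // stored renderer avoids recreating the host without a re-render.
  void UpdateCanvasData(CanvasDataLike* aData) {
    if (!mRenderer) {
      mRenderer = std::make_shared<CanvasRendererLike>();
    }
    aData->mRenderer = mRenderer;
  }

 private:
  // Keeping this alive is the core of the fix: as long as the renderer
  // exists, the NotifyNotUsed()-style teardown never runs.
  std::shared_ptr<CanvasRendererLike> mRenderer;
};
```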
Differential Revision: https://phabricator.services.mozilla.com/D143811
We need them for SVG primitives.
This patch adds a bit of plumbing to disable snapping for some primitives and to force the antialiasing shader feature where needed, and uses it for SVG solid rectangles and images.
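As an illustration of the kind of plumbing involved, here is a hypothetical per-primitive flags sketch; PrimFlags, NoSnap, ForceAntialias and RectPrimitive are made-up names and do not match the actual Gecko/WebRender flags.

```cpp
// Hypothetical per-primitive flags; illustrative only.
#include <cstdint>

enum class PrimFlags : uint8_t {
  None = 0,
  NoSnap = 1 << 0,         // keep the unsnapped rect for SVG geometry
  ForceAntialias = 1 << 1  // always enable the antialiasing shader feature
};

inline PrimFlags operator|(PrimFlags aLhs, PrimFlags aRhs) {
  return static_cast<PrimFlags>(static_cast<uint8_t>(aLhs) |
                                static_cast<uint8_t>(aRhs));
}

inline bool HasFlag(PrimFlags aValue, PrimFlags aFlag) {
  return (static_cast<uint8_t>(aValue) & static_cast<uint8_t>(aFlag)) != 0;
}

struct RectPrimitive {
  float x = 0.0f;
  float y = 0.0f;
  float width = 0.0f;
  float height = 0.0f;
  PrimFlags flags = PrimFlags::None;
};

// An SVG solid rectangle would be pushed with both flags set, e.g.:
//   RectPrimitive rect;
//   rect.flags = PrimFlags::NoSnap | PrimFlags::ForceAntialias;
```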
Differential Revision: https://phabricator.services.mozilla.com/D139024
This patch removes more main thread dependencies from the content side
of WebGPU. Instead of issuing a resource update for an external image,
we now use an async image pipeline in conjunction with
CompositableInProcessManager from part 1. This allows us to update the
HTMLCanvasElement bound to the WebGPU device without having to go
through the main thread, or even the content process after the swap
chain update / readback has been requested.
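A simplified sketch of the flow this enables, with hypothetical names (WebGpuParentLike, ImageHostLike): once the swap chain is wired up, presenting a frame swaps the texture on a compositor-owned image host and schedules a composite, with no main-thread or content-process round trip. The real async image pipeline machinery is considerably richer than this.

```cpp
// Sketch only; not the real async image pipeline API.
#include <functional>
#include <memory>
#include <utility>

struct Texture { /* compositor-side frame contents */ };

class ImageHostLike {
 public:
  void SetCurrentTexture(std::shared_ptr<Texture> aTexture) {
    mCurrent = std::move(aTexture);
  }

 private:
  std::shared_ptr<Texture> mCurrent;
};

class WebGpuParentLike {
 public:
  WebGpuParentLike(std::shared_ptr<ImageHostLike> aHost,
                   std::function<void()> aScheduleComposite)
      : mHost(std::move(aHost)),
        mScheduleComposite(std::move(aScheduleComposite)) {}

  // Called when the GPU finishes rendering into the swap chain. No content
  // process or main thread round trip is involved at this point.
  void OnPresent(std::shared_ptr<Texture> aFrame) {
    mHost->SetCurrentTexture(std::move(aFrame));
    mScheduleComposite();
  }

 private:
  std::shared_ptr<ImageHostLike> mHost;
  std::function<void()> mScheduleComposite;
};
```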
Differential Revision: https://phabricator.services.mozilla.com/D138887
For WebGPU, we produce the textures in the compositor process and the
content process doesn't need to be that involved except for hooking up
the texture to the display list. Currently this is done via an external
image ID.
Given that WebGPU needs to work with OffscreenCanvas, it would be best
if its display pipeline was consistent whether it was gotten from an
HTMLCanvasElement, OffscreenCanvas on the main thread, or on a worker
thread. As such, using an AsyncImagePipeline would be best.
However there is no real need to bounce the handles across process
boundaries. Hence this patch which adds CompositableInProcessManager.
This static class is responsible for collecting WebRenderImageHost
objects backed by TextureHost objects which do not leave the compositor
process. This will allow WebGPUParent to schedule compositions directly
in future patches.
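A minimal sketch of the registry shape described above, assuming a plain integer handle; InProcessCompositableRegistry and ImageHostLike are illustrative names, and the real CompositableInProcessManager is richer than this.

```cpp
// Hypothetical in-process compositable registry; illustrative only.
#include <cstdint>
#include <map>
#include <memory>
#include <mutex>

struct ImageHostLike { /* wraps compositor-side textures for WebRender */ };

class InProcessCompositableRegistry {
 public:
  // The content side allocates a handle; the compositor side registers the
  // host under it without any texture crossing process boundaries.
  static std::shared_ptr<ImageHostLike> Register(uint64_t aHandle) {
    std::lock_guard<std::mutex> lock(sMutex);
    auto host = std::make_shared<ImageHostLike>();
    sHosts[aHandle] = host;
    return host;
  }

  // A WebGPUParent-like producer looks up its host to attach new frames and
  // schedule compositions directly.
  static std::shared_ptr<ImageHostLike> Lookup(uint64_t aHandle) {
    std::lock_guard<std::mutex> lock(sMutex);
    auto it = sHosts.find(aHandle);
    return it != sHosts.end() ? it->second : nullptr;
  }

  static void Unregister(uint64_t aHandle) {
    std::lock_guard<std::mutex> lock(sMutex);
    sHosts.erase(aHandle);
  }

 private:
  static std::mutex sMutex;
  static std::map<uint64_t, std::shared_ptr<ImageHostLike>> sHosts;
};

std::mutex InProcessCompositableRegistry::sMutex;
std::map<uint64_t, std::shared_ptr<ImageHostLike>>
    InProcessCompositableRegistry::sHosts;
```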
Differential Revision: https://phabricator.services.mozilla.com/D138588
This patch ensures that we only update the external image resource for
WebGPU when there has been an actual change for the resource. In order
to guarantee this, we wait for the present to complete, and only then
issue the update. WebRenderBridgeChild::SendResourceUpdates will also
trigger a frame generation if any resources were changed, which means we
don't need to trigger a paint on the frame itself anymore.
Note that we still have a race condition when we write into the
MemoryTextureHost while in PresentCallback, and the renderer thread may
be accessing the pixel data to upload to the GPU.
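A small sketch of the "update only on real change" pattern, under the assumption that a frame identifier is available in the present callback; ExternalImageUpdater and its members are hypothetical names, not the actual WebGPU classes.

```cpp
// Sketch only; hypothetical names.
#include <cstdint>
#include <functional>
#include <utility>

class ExternalImageUpdater {
 public:
  explicit ExternalImageUpdater(std::function<void()> aSendResourceUpdate)
      : mSendResourceUpdate(std::move(aSendResourceUpdate)) {}

  // Called from the present callback once the new frame contents are ready.
  void OnPresentComplete(uint64_t aFrameId) {
    if (aFrameId == mLastSubmittedFrameId) {
      return;  // nothing changed: no resource update, no extra paint
    }
    mLastSubmittedFrameId = aFrameId;
    // Sending the update also triggers frame generation downstream, so the
    // frame itself never needs an explicit paint request.
    mSendResourceUpdate();
  }

 private:
  uint64_t mLastSubmittedFrameId = 0;
  std::function<void()> mSendResourceUpdate;
};
```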
Differential Revision: https://phabricator.services.mozilla.com/D138349
Part of how invalidation works with WebRender is that we assume frames
with a WebRenderUserData object attached to them are in view. This means
for images that we must ensure we create an empty
WebRenderImageProviderData object even when we have no provider or
surface for display. This will allow us to invalidate properly when we
get the FRAME_COMPLETE notification from imagelib indicating that the
redecode has completed.
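A sketch of that invalidation contract with hypothetical names (FrameLike, ProviderUserData): the user data is created up front, even while empty, so the later decode-complete notification has something to find and can schedule a paint.

```cpp
// Sketch only; not the real WebRenderUserData machinery.
#include <memory>
#include <utility>

struct Surface { /* decoded pixels */ };

struct ProviderUserData {
  std::shared_ptr<Surface> mSurface;  // stays null until decoding finishes
};

class FrameLike {
 public:
  // Called while building display items, even when no surface is available.
  ProviderUserData& EnsureProviderUserData() {
    if (!mUserData) {
      mUserData = std::make_unique<ProviderUserData>();
    }
    return *mUserData;
  }

  // The "frame complete" notification lands here; a frame without user data
  // would be treated as out of view and never repainted.
  void OnDecodeComplete(std::shared_ptr<Surface> aSurface) {
    if (!mUserData) {
      return;
    }
    mUserData->mSurface = std::move(aSurface);
    SchedulePaint();
  }

 private:
  void SchedulePaint() { /* invalidate this frame */ }

  std::unique_ptr<ProviderUserData> mUserData;
};
```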
Differential Revision: https://phabricator.services.mozilla.com/D135077
Previously with ImageContainers, we would put the new preferred surface
into the ImageContainer. When we check if we should invalidate, it would
have a different image key, and hence invalidate the image frame and
schedule a paint.
With ImageProviders, it returns the same key in this case, because the
ImageProvider represents a particular surface. As such, we need to
actually track when we get a substituted ImageProvider, and invalidate
the image frame more aggressively to ensure we get the preferred size.
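A sketch of the substitution check with hypothetical names (ProviderResult, ImageFrameLike): the image key alone is no longer enough, so the "was substituted" bit is tracked and triggers the extra invalidation.

```cpp
// Sketch only; hypothetical names.
#include <cstdint>

struct ProviderResult {
  uint64_t mImageKey = 0;
  bool mWasSubstituted = false;  // a non-preferred surface was returned
};

class ImageFrameLike {
 public:
  void Update(const ProviderResult& aResult) {
    // With ImageContainers a new preferred surface meant a new key; with
    // providers the key can stay the same, so also check the substitution
    // bit and paint again to request the preferred size.
    const bool needsInvalidation =
        aResult.mImageKey != mLastImageKey || aResult.mWasSubstituted;
    mLastImageKey = aResult.mImageKey;
    if (needsInvalidation) {
      SchedulePaint();
    }
  }

 private:
  void SchedulePaint() { /* invalidate this image frame */ }

  uint64_t mLastImageKey = 0;
};
```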
Differential Revision: https://phabricator.services.mozilla.com/D132583
Other item types that support flattening return false from CanHandleOpacity if their CreateWebRenderCommands implementation will fail. That is really hard to determine in advance for text, so it's simpler to just handle the opacity in the fallback path too.
Differential Revision: https://phabricator.services.mozilla.com/D125634
* Majorly simplify CanvasRenderer
* Replace GLScreenBuffer with trivial GLSwapChain
* Use descriptor structs so that future SharedSurface changes aren't so painful to propagate (see the sketch after this list)
* Mortgage/strip out more OffscreenCanvas code for now
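A sketch of the descriptor-struct idea from the list above; SurfaceDesc and SharedSurfaceLike are hypothetical names, and the real SharedSurface descriptors carry much more state. The point is that new fields can be added with sensible defaults without touching every call site.

```cpp
// Sketch only; hypothetical names.
#include <cstdint>

struct SurfaceDesc {
  int32_t width = 0;
  int32_t height = 0;
  bool hasAlpha = true;
  bool canRecycle = true;
  // New fields can be appended here with defaults, without having to update
  // every creation call site.
};

class SharedSurfaceLike {
 public:
  static SharedSurfaceLike Create(const SurfaceDesc& aDesc) {
    return SharedSurfaceLike(aDesc);
  }

 private:
  explicit SharedSurfaceLike(const SurfaceDesc& aDesc) : mDesc(aDesc) {}

  SurfaceDesc mDesc;
};

// Usage: callers fill in only what they care about.
//   SurfaceDesc desc;
//   desc.width = 256;
//   desc.height = 256;
//   auto surface = SharedSurfaceLike::Create(desc);
```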
Differential Revision: https://phabricator.services.mozilla.com/D75055
When a transform depends on the layout size of an element, one can see
visual distortions caused by the difference between the unsnapped size
used in the transform, and the snapped size calculated during scene
building. Ideally we could compute the transform after we snap, rather
than before. This patch adds support for a computed reference frame
which takes parameters to calculate the ideal transform dynamically.
In a future patch, we should make videos take advantage of this same
mechanism to avoid similar problems. This requires support for mirroring
and rotations.
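A sketch of the computed-reference-frame idea for the simplest case: a scale derived from an unsnapped layout size, evaluated only once the snapped size is known. All names and fields here are illustrative, and the real mechanism also has to cover the mirroring and rotation cases noted above.

```cpp
// Sketch only; illustrative names and fields.
struct SizeF {
  float width = 0.0f;
  float height = 0.0f;
};

struct Scale2D {
  float x = 1.0f;
  float y = 1.0f;
};

// Recorded at display-list build time, before snapping happens.
struct ComputedFrameParams {
  SizeF unsnappedSize;
};

// Evaluated during scene building, after snapping, so the resulting transform
// matches the snapped geometry instead of the pre-snap layout size.
inline Scale2D ComputeFrameTransform(const ComputedFrameParams& aParams,
                                     const SizeF& aSnappedSize) {
  Scale2D scale;
  if (aParams.unsnappedSize.width > 0.0f) {
    scale.x = aSnappedSize.width / aParams.unsnappedSize.width;
  }
  if (aParams.unsnappedSize.height > 0.0f) {
    scale.y = aSnappedSize.height / aParams.unsnappedSize.height;
  }
  return scale;
}
```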
Differential Revision: https://phabricator.services.mozilla.com/D77956
This change enables light tracking of the commands and submissions
that affect a CanvasContext. Upon reaching the GPUQueue, they send
a signal for the parent HTML Element to be invalidated.
We are also invalidating the HTML Element and requesting a new
frame to be built on the creation of the swapchain.
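A sketch of the tracking-and-invalidate idea; CommandBufferLike and QueueLike are hypothetical stand-ins for the real command buffer and GPUQueue wiring, which is more involved, but the signal-on-submit shape is the same.

```cpp
// Sketch only; hypothetical names.
#include <functional>
#include <utility>
#include <vector>

struct CommandBufferLike {
  bool mTouchesCanvas = false;  // set while encoding passes that render to it
};

class QueueLike {
 public:
  explicit QueueLike(std::function<void()> aInvalidateCanvasElement)
      : mInvalidate(std::move(aInvalidateCanvasElement)) {}

  void Submit(const std::vector<CommandBufferLike>& aBuffers) {
    bool touchedCanvas = false;
    for (const CommandBufferLike& buffer : aBuffers) {
      touchedCanvas = touchedCanvas || buffer.mTouchesCanvas;
      // ... hand the buffer to the GPU backend here ...
    }
    if (touchedCanvas) {
      // Signal the parent HTML element to repaint with the new frame.
      mInvalidate();
    }
  }

 private:
  std::function<void()> mInvalidate;
};
```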
Differential Revision: https://phabricator.services.mozilla.com/D71194