* Deserialization now only happens via a mutator
* The CID for each URI implementation now returns that class's nsIURIMutator
* The QueryInterface of mutators implementing nsISerializable will now act as a finalizer if passed the IID of an interface implemented by the URI it holds
MozReview-Commit-ID: H5MUJOEkpia
--HG--
extra : rebase_source : 01c8d16f7d31977eda6ca061e7889cedbf6940c2
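A minimal sketch of the resulting deserialization flow, assuming an
nsIObjectInputStream named aStream; the call sites are illustrative,
not the exact lines from the patch:

    // The CID stored in the stream now resolves to the class's mutator,
    // so ReadObject() instantiates the mutator and invokes its
    // nsISerializable::Read() to populate the URI it holds.
    nsCOMPtr<nsISupports> supports;
    nsresult rv = aStream->ReadObject(/* aIsStrongRef = */ true,
                                      getter_AddRefs(supports));
    NS_ENSURE_SUCCESS(rv, rv);

    // QueryInterface with a URI IID acts as the finalizer: the mutator
    // hands back the finalized, immutable URI.
    nsCOMPtr<nsIURI> uri = do_QueryInterface(supports, &rv);
    NS_ENSURE_SUCCESS(rv, rv);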
After decoding the first frame we allocate the second frame, but
before it finishes we encounter an error. Decoder::PostError is called;
it aborts the second frame and decrements the frame count. But
AnimationSurfaceProvider::CheckForFrameAtTerminalState just asks the
decoder for the current frame ref (which the decoder never cleared) and
inserts that.
The condition we use from the decoder to decide whether to report a new
frame is mFinishedNewFrame (via TakeCompleteFrameCount); however, this
doesn't directly correspond to mFrameCount. So we create a new bool on
the Decoder to track when there is a frame that we can take.
This didn't cause any problems before, but now we have tighter coupling
between the list of frames the AnimationSurfaceProvider has and what
FrameAnimator expects.
Another possible fix would be to clear the current frame ref in
PostError, but the only place we clear the current frame is when we
allocate the next frame, and the mImageData pointer is still around,
which decoders could theoretically use to do final processing on the
last partial frame.
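To make the fix concrete, here is a paraphrased sketch; the member
names mirror the message, but the bodies are not the literal patch:

    void Decoder::PostError() {
      mError = true;
      if (mInFrame) {
        // Abort the partially-allocated frame and back out the count,
        // but note that mCurrentFrame is deliberately left in place.
        mInFrame = false;
        mFrameCount--;
      }
    }

    // New flag, set only when a frame was actually completed, so that
    // AnimationSurfaceProvider::CheckForFrameAtTerminalState can consult
    // it before taking the (possibly aborted) current frame ref.
    bool Decoder::HasFrameToTake() const { return mHasFrameToTake; }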
These threads should not have deep stacks, and as we can have a number
of them running simultaneously, it's beneficial to set the stack size to
something reasonably low.
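For illustration, NS_NewNamedThread accepts an explicit stack size as
its optional final argument; the thread name and byte count below are
placeholders rather than the values from this patch:

    #include "nsThreadUtils.h"

    nsCOMPtr<nsIThread> thread;
    nsresult rv = NS_NewNamedThread(NS_LITERAL_CSTRING("ImgDecoder"),
                                    getter_AddRefs(thread),
                                    /* aInitialEvent = */ nullptr,
                                    /* aStackSize = */ 256 * 1024);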
When cloning an animated image decoder, we asserted that
Decoder::HasAnimation was true. This is incorrect: if the decoder has
yet to complete the metadata decode, or has completed it but only
discovers the image is animated when it encounters the second frame,
then we will try to clone a valid animated image decoder but fail the
assertion. Instead, this patch verifies that the image type supports
animations.
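Roughly, the assertion changes shape as follows; the type-check helper
shown is a paraphrase, not necessarily the actual name:

    // Before: fails if metadata decoding hasn't finished, or if the
    // second frame hasn't been discovered yet.
    MOZ_ASSERT(aDecoder->HasAnimation());

    // After: animation support is a static property of the image type,
    // so it holds regardless of how far decoding has progressed.
    MOZ_ASSERT(TypeSupportsAnimation(aDecoder->GetType()));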
With the previous parts, for large animated images, we will now discard
previous frames after we reach the threshold. This mochitest configures
a very low threshold, such that it will trigger on a small animated
image. It then verifies that we are able to loop the animation a
couple of times.
When we need to recreate an animated image decoder because it was
discarded, the animation may have progressed beyond the first frame.
Given that later in the patch series we need FrameAnimator to drive
the decoding more actively, this simplifies its role by letting it
assume the decoder's initial state matches its own. Passing in
the currently displayed frame allows the decoder to advance its frame
buffer (and potentially discard unnecessary frames), such that when the
animation actually wants to advance as it normally would, the decoder
state matches what it would have been if it had never been discarded.
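A sketch of the idea, with a hypothetical signature: the recreated
decoder is told which frame the animation is currently displaying so it
can fast-forward before playback resumes.

    // aCurrentFrame is the frame FrameAnimator is currently showing;
    // the decoder advances (and possibly discards) up to that point so
    // its state matches a decoder that was never discarded.
    RefPtr<Decoder> decoder = DecoderFactory::CreateAnimationDecoder(
        aType, aSourceBuffer, aIntrinsicSize, aDecoderFlags,
        aSurfaceFlags, aCurrentFrame);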
Note that AnimationSurfaceProvider will override these methods to give a
proper implementation in a later patch in this series. For now, they are
mostly stubbed, using the default implementation from ISurfaceProvider.
They focus on the main operations we perform on an animation (a sketch
follows the list):
1) Progressing through the animation, e.g. advancing a frame. If we
don't decode the whole animation up front, we need to know at the
decoder level where we are in the display of the animation.
2) Restarting an animation from the beginning. This is a specialized
case of the above, where we want to skip explicitly advancing through
the remaining frames and instead restart at the beginning. The decoder
may have already discarded the earliest frames and must start redecoding
them.
3) Knowing whether or not the decoder is still active, i.e. whether we
may still be missing frames.
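A sketch of how those operations might surface on the provider; the
method names are patterned on the description above, not the actual
ISurfaceProvider declarations:

    class ISurfaceProvider {
    public:
      // (1) Advance the animation to the next frame.
      virtual bool AdvanceFrame() { return false; }

      // (2) Restart playback from the first frame; the decoder may have
      // to redecode frames it has already discarded.
      virtual void Reset() {}

      // (3) Whether decoding is still active, i.e. frames may be missing.
      virtual bool IsFullyDecoded() const { return true; }
    };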
Later in the patch series, we use the new APIs to facilitate cloning of
an existing decoder. This is useful when you want to redecode the same
image with the exact same configuration but from the very beginning.
The shared memory handle reporting has been generalized to be an
external handle reporting. This is used both for shared memory and for
volatile memory (on Android). This will allow us to have a better sense
of just how many handles are being used by images on Android.
Additionally, we were not properly reporting forced heap-allocated
memory when we put animated frames on the heap. This is because
we used SourceSurfaceAlignedRawData without implementing
AddSizeOfExcludingThis.
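The fix amounts to giving SourceSurfaceAlignedRawData a size-of
override along these lines; the signature and member access are
simplified relative to the real reporting interface:

    // Without this override, frames forced onto the heap were invisible
    // to the memory reporter.
    void SourceSurfaceAlignedRawData::AddSizeOfExcludingThis(
        MallocSizeOf aMallocSizeOf, size_t& aHeapSizeOut) const {
      // Measure the pixel buffer backing the surface.
      aHeapSizeOut += aMallocSizeOf(mArray.Data());
    }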
image.mem.volatile.min_threshold_kb is the minimum buffer allocation,
in KB, for an image frame to use volatile memory. Allocations smaller
than this use the heap instead. It is set to > 0 only on Android.
image.mem.animated.use_heap forces the frames of animated images to use
the heap. This is only enabled for Android, where it was previously a
compile-time option.
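Taken together, the two prefs drive a decision along these lines; the
helper and pref-getter names are assumptions patterned on the
description:

    static bool ShouldUseHeap(const IntSize& aSize, int32_t aStride,
                              bool aIsAnimated) {
      // Animated frames may be forced onto the heap (Android only).
      if (aIsAnimated && gfxPrefs::ImageMemAnimatedUseHeap()) {
        return true;
      }

      // Small frames stay on the heap to conserve volatile-memory
      // handles; the threshold is > 0 only on Android.
      int32_t bufferKb = (aStride * aSize.height) / 1024;
      return bufferKb < gfxPrefs::ImageMemVolatileMinThresholdKB();
    }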
Move the initialization of SharedSurfacesParent from the compositor
thread creation to mirror the other WebRender-specific components, such
as the render thread creation. Now it will only be created if WebRender
is in use. Also prevent shared surfaces from being used by the image
frame allocator, even if image.mem.shared is set -- there is no purpose
in allowing this at present. It was causing startup crashes for users
who requested image.mem.shared and/or WebRender via gfx.webrender.all
but did not actually get WebRender at all. Surfaces would get allocated
in shared memory, try to register themselves with the WR render
thread, and then crash since that thread was never created.
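The gating amounts to tying SharedSurfacesParent to the render
thread's lifetime; a simplified sketch of the startup path:

    // Only bring up the shared-surface machinery when WebRender, and
    // therefore the render thread that consumes these surfaces, is
    // actually in use.
    if (gfxVars::UseWebRender()) {
      wr::RenderThread::Start();
      SharedSurfacesParent::Initialize();
    }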
The image decoding thread pool can grow to be quite large, up to 32
threads, depending on the number of processors on the system. If the
user is not actively browsing, these threads are occupying resources
which could be reused elsewhere. After the timeout period, it will
release up to half of the threads in the pool.
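The strategy, sketched with standard C++ primitives rather than the
actual DecodePool internals: a worker that times out waiting for work
exits, provided more than half of the maximum thread count remains.

    #include <chrono>
    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <mutex>

    struct DecodePoolSketch {
      std::mutex mMutex;
      std::condition_variable mCondVar;
      std::deque<std::function<void()>> mQueue;
      size_t mAlive = 1;          // pool starts with one worker
      size_t mMaxThreads = 32;    // scaled by processor count in reality
      std::chrono::seconds mIdleTimeout{60};

      void Worker() {
        std::unique_lock<std::mutex> lock(mMutex);
        while (true) {
          bool gotWork = mCondVar.wait_for(lock, mIdleTimeout,
                                           [this] { return !mQueue.empty(); });
          if (!gotWork) {
            // Idle timeout: exit if at least half the pool would remain.
            if (mAlive > mMaxThreads / 2) {
              --mAlive;
              return;
            }
            continue;
          }
          auto job = std::move(mQueue.front());
          mQueue.pop_front();
          lock.unlock();
          job();  // run the decode job outside the lock
          lock.lock();
        }
      }
    };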
Currently imagelib's DecodePool spawns the maximum number of threads
during startup, based on the number of processors. This patch changes it
to spawn a single thread on startup (which cannot fail), and to spawn
more, up to the maximum, as jobs are added to the queue. A thread will
only be
spawned if there is a backlog present when a new job is added. This
typically results in fewer threads allocated in the parent process, as
well as deferred spawning in the content processes.
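Extending the sketch above: enqueueing spawns an extra worker only when
a backlog is present and the pool is below its maximum.

    #include <thread>

    void Push(DecodePoolSketch& aPool, std::function<void()> aJob) {
      {
        std::lock_guard<std::mutex> lock(aPool.mMutex);
        aPool.mQueue.push_back(std::move(aJob));
        // A job is already waiting and we have capacity: add a worker.
        if (aPool.mQueue.size() > 1 && aPool.mAlive < aPool.mMaxThreads) {
          ++aPool.mAlive;
          std::thread([&aPool] { aPool.Worker(); }).detach();
        }
      }
      aPool.mCondVar.notify_one();
    }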
Originally we attempted to finalize the current frame from the contained
decoder in nsICODecoder::FinishResource. This is wrong because we
haven't acquired the frame from the contained decoder yet. This happens
in nsICODecoder::GetFinalStateFromContainedDecoder, and so
the imgFrame::Finalize call should be moved there. This was causing us to
use fallback image sharing with WebRender after a GPU process crash,
instead of shared surfaces, because it can't get a new file handle for
the surface data until we have finished writing all of the image data.
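A paraphrased sketch of where the call moves; only the relevant lines
of GetFinalStateFromContainedDecoder are shown:

    // Acquire the contained decoder's frame first...
    mCurrentFrame = mContainedDecoder->GetCurrentFrameRef();

    // ...and only then finalize it. Doing this in FinishResource was too
    // early: the surface was not fully written, so after a GPU process
    // crash WebRender could not get a new file handle for it and fell
    // back to non-shared image sharing.
    if (mCurrentFrame) {
      mCurrentFrame->Finalize();
    }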