This patch makes ImageContainer create a SharedSurfacesAnimation object
when it detects that we are using shared surfaces and are producing full
frames.
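Roughly, the idea is the following minimal sketch; the helper names UsesSharedSurfaces/ProducesFullFrames and the member are illustrative, not the actual detection code:

  // Sketch only: create the animation object lazily once we know every
  // update is a full frame backed by a shared surface.
  void ImageContainer::EnsureSharedSurfacesAnimation() {
    if (!mSharedAnimation && UsesSharedSurfaces() && ProducesFullFrames()) {
      mSharedAnimation = new SharedSurfacesAnimation();
    }
  }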
Differential Revision: https://phabricator.services.mozilla.com/D7505
We were marking them used even if only a decode was requested.
This can cause us to hold extra decoded copies of the image around because we have a tendency to request decode at the intrinsic size.
If we do not pass the high quality scaling flag, then the resulting surface will be marked as "cannot substitute", which is not accurate and not what we want.
The only place that actually tries to be smart about the size is nsImageFrame::MaybeDecodeForPredictedSize. All other cases just ask for the intrinsic size.
The two most likely cases are that there are no decoded copies of the image, or there is one decoded (or in progress) copy of the image.
In the first case we will request decode at the intrinsic size, and then if we draw at a different size that draw will request the proper size. This doesn't change with this patch.
In the second case there is a decoded copy already available, likely from a draw call on the image, and that is the surface size that we want, so we save a decode. If we are actually drawing the image at two different sizes, the second size will be slightly delayed, but we have the wrongly sized copy of the image that we can draw until then. This seems like a good tradeoff to avoid always decoding an intrinsic-size copy of images.
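For illustration, the kind of call site affected looks roughly like this; FLAG_HIGH_QUALITY_SCALING and RequestDecodeForSize are existing imgIContainer APIs, while the surrounding variables are assumed:

  // Pass the high quality scaling flag so the resulting surface is not
  // incorrectly marked "cannot substitute" for other sizes.
  uint32_t flags = imgIContainer::FLAG_HIGH_QUALITY_SCALING;
  mImage->RequestDecodeForSize(predictedSize, flags);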
When generating display lists for WebRender, we were not caching the
draw result via nsDisplayItemGenericImageGeometry::UpdateDrawResult (or
similar) after completing CreateWebRenderCommands. This is important
because reftests use this to force sync decoding for images; it may be a
reason for image-related intermittent failures on *-qr builds.
Additionally, we may have been requesting fallback in cases where fallback
could not do anything more than WebRender could. For example, if we can't
get an image container yet, there is no point in requesting fallback
because it might just be that we haven't started decoding yet. We should just
return the actual draw result in such cases.
In addition to the image container, the draw result can also be useful
for callers to know whether or not the surface(s) in the container are
fully decoded or not. This is used in subsequent parts to avoid
flickering in some cases.
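A simplified sketch of the pattern; the ImgDrawResult/UpdateDrawResult names follow the existing display-list code, and the surrounding logic is condensed:

  // At the end of CreateWebRenderCommands, cache the draw result so reftests
  // that force sync decoding can observe whether the image was ready.
  ImgDrawResult result =
      container ? ImgDrawResult::SUCCESS : ImgDrawResult::NOT_READY;
  nsDisplayItemGenericImageGeometry::UpdateDrawResult(this, result);
  // If there is no container yet, fallback could not decode any faster, so
  // return |result| rather than requesting fallback painting.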
It is possible for a decoder's iterator to be invalid in some error
conditions, all related to the ICO decoder seeking behaviour. Since we
assume that the iterator is always valid for the purposes of generating
the decoder's telemetry data, a malformed ICO image could cause a crash.
This patch removes the assumption that the iterator is valid, and
ensures we don't add the decoder's data to telemetry if it is invalid.
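The guard amounts to something like this sketch; member and field names are illustrative:

  // Only read counts out of the iterator for telemetry if it is still valid;
  // a malformed ICO can leave it invalid after a bad seek.
  if (mIterator.isSome()) {
    mTelemetry.mChunkCount = mIterator->ChunkCount();
    mTelemetry.mByteCount = mIterator->ByteCount();
  }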
This was done automatically by replacing:
s/mozilla::Move/std::move/
s/ Move(/ std::move(/
s/(Move(/(std::move(/
Removing the 'using mozilla::Move;' lines.
And then with a few manual fixups; see the bug for the split series.
MozReview-Commit-ID: Jxze3adipUh
Regardless of the size of an encoded image, SourceBuffer::Compact would
try to consolidate all of the chunks into a single chunk. If an image is
quite large, it can be actively harmful to do this, because we end up
requesting a very large contiguous chunk of memory for no real reason, and
spend extra time on the main thread doing the memcpy/consolidation.
Instead we now cap out the chunk size at 20MB. If we start allocating
chunks of this size, we will not perform compacting when we have
received all of the data. (Save for realloc'ing the last chunk since it
probably isn't full.)
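A sketch of the capping logic; the 20MB figure is from this change, the surrounding names are illustrative:

  static const size_t kMaxChunkCapacity = 20 * 1024 * 1024;  // 20MB cap

  bool SourceBuffer::ShouldCompact() const {
    // If we already had to allocate capped chunks, consolidating them into
    // one huge contiguous buffer wastes main-thread time and address space,
    // so skip compacting and only realloc the (likely partial) last chunk.
    return mChunks.Length() > 1 && mChunks[0].Capacity() < kMaxChunkCapacity;
  }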
On a related note, if we hit an out-of-memory condition in the middle of
appending data to the SourceBuffer, we would swallow the error. This is
because nsIInputStream::ReadSegments will succeed if any data was
written. This leaves the SourceBuffer out of sync. We now propagate this
error up properly to the higher levels.
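Conceptually the fix looks like this sketch; mStatus is an assumed member holding the first append failure, and AppendToSourceBuffer is an assumed writer callback:

  uint32_t bytesRead = 0;
  nsresult rv = aInputStream->ReadSegments(AppendToSourceBuffer, this,
                                           aCount, &bytesRead);
  // ReadSegments reports success if *any* data was consumed, so check whether
  // an append failed (e.g. OOM) partway through and propagate that instead.
  if (NS_SUCCEEDED(rv) && NS_FAILED(mStatus)) {
    rv = mStatus;
  }
  return rv;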
When we need to recreate an animated image decoder because it was
discarded, the animation may have progressed beyond the first frame.
Given that later in the patch series we need FrameAnimator to drive the
decoding more actively, this simplifies its role by letting it assume the
decoder's initial state matches its own initial state. Passing in
the currently displayed frame allows the decoder to advance its frame
buffer (and potentially discard unnecessary frames), such that when the
animation actually wants to advance as it normally would, the decoder
state matches what it would have been if it had never been discarded.
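In terms of the decoder factory, the change amounts to passing one more piece of state; the aCurrentFrame parameter name here is hypothetical:

  // Recreate the discarded decoder, telling it which frame is currently on
  // screen so it can advance (and discard) up to that point before playback
  // resumes as normal.
  RefPtr<Decoder> decoder = DecoderFactory::CreateAnimationDecoder(
      mImageType, WrapNotNull(mSourceBuffer), mSize, decoderFlags,
      surfaceFlags, aCurrentFrame);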
Note that AnimationSurfaceProvider will override these methods to give a
proper implementation in a later patch in this series. For now, they are
mostly stubbed, using the default implementation from ISurfaceProvider.
They focus on the main operations we perform on an animation:
1) Progressing through the animation, e.g. advancing a frame. If we
don't decode the whole animation up front, we need to know at the
decoder level where we are in the display of the animation.
2) Restarting an animation from the beginning. This is a specialized
case of the above, where we want to skip explicitly advancing through
the remaining frames and instead restart at the beginning. The decoder
may have already discarded the earliest frames and must start redecoding
them.
3) Knowing whether or not the decoder is still active, i.e. whether we may
still be missing frames.
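The default (stub) versions described above look roughly like this sketch; the method names are illustrative rather than the exact ones added to ISurfaceProvider:

  // 1) Advance the animation to a given frame; a fully-decoded provider has
  //    nothing to do here.
  virtual void Advance(size_t aFrame) {}

  // 2) Restart from the beginning, possibly forcing already-discarded early
  //    frames to be redecoded. Returns whether a restart was needed.
  virtual bool Reset() { return false; }

  // 3) Report whether decoding is still in progress, i.e. whether frames may
  //    still be missing.
  virtual bool IsFinished() const { return true; }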
If there is an active provider which has yet to produce a frame, any
calls to SurfaceCache::Lookup will return MatchType::PENDING. If
RasterImage::Lookup gets the above result while given FLAG_SYNC_DECODE,
it will attempt to start a new decoder. It is entirely possible that
when we try to insert the new provider into the SurfaceCache, it cannot
because the original provider finally did produce something. In that
case we should abandon attempting to redecode and retry our lookup.
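In pseudocode close to the RasterImage lookup path (names simplified; TryStartNewDecoder is a placeholder for the decode kick-off):

  LookupResult result = SurfaceCache::Lookup(ImageKey(this), surfaceKey);
  if (result.Type() == MatchType::PENDING && (aFlags & FLAG_SYNC_DECODE)) {
    // Try to start a new decoder. If the SurfaceCache refuses the insertion,
    // the pending provider finally produced a frame, so abandon the redecode
    // and simply look the surface up again.
    if (!TryStartNewDecoder(aSize, aFlags)) {
      result = SurfaceCache::Lookup(ImageKey(this), surfaceKey);
    }
  }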
This adds IsImageContainerAvailableAtSize and GetImageContainerAtSize to
the imgIContainer interface, as well as stubbing them out for all of the
classes which implement it. The real implementations will follow for the
more complicated classes (RasterImage, VectorImage).
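The stubs are essentially the following (signatures simplified; DynamicImage is just one of the implementing classes):

  NS_IMETHODIMP_(bool)
  DynamicImage::IsImageContainerAvailableAtSize(LayerManager* aManager,
                                                const IntSize& aSize,
                                                uint32_t aFlags) {
    return false;  // no image container support here yet
  }

  NS_IMETHODIMP_(already_AddRefed<ImageContainer>)
  DynamicImage::GetImageContainerAtSize(LayerManager* aManager,
                                        const IntSize& aSize,
                                        uint32_t aFlags) {
    return nullptr;  // real implementations land later for Raster/VectorImage
  }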
As part of the move, we add an IntSize parameter to
ImageResource::GetCurrentImage. This is because we don't have access to
the image's size (yet) from ImageResource, but additionally because we
will need this anyway when we support multiple image containers at
different sizes.
The only change to the moved implementation is that we no longer have
access to RasterImage::mHasSize and RasterImage::mSize. Thus we rely
upon imgIContainer::IsImageContainerAvailable to perform these checks.
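Sketched out, the moved code now starts along these lines (parameter lists condensed, helper usage assumed):

  // We no longer have RasterImage::mHasSize/mSize in ImageResource, so the
  // derived class answers the availability question instead.
  if (!IsImageContainerAvailable(aManager, aFlags)) {
    return nullptr;
  }
  // GetCurrentImage now takes the requested size explicitly.
  RefPtr<layers::Image> image = GetCurrentImage(aSize, aFlags);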
This state will eventually be used by VectorImage when it supports image
containers. For now, it is harmless beyond using slightly more memory
for SVGs.
Most cases where the pointer is stored into an already-declared variable can
trivially be changed to MakeNotNull<T*>, as the NotNull raw pointer will end
up in a smart pointer.
In RAII cases, the target type can be specified (e.g.:
`MakeNotNull<RefPtr<imgFrame>>()`), in which case the variable type may just be
`auto`, similar to the common use of MakeUnique.
The exception is when the target type is a base pointer, in which case it
must be specified in the declaration.
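For example (imgFrame and AnimationSurfaceProvider are used purely as illustrations here, with constructor arguments elided):

  // Raw-pointer form: the NotNull<imgFrame*> from MakeNotNull ends up owned
  // by an already-declared smart pointer.
  RefPtr<imgFrame> frame = MakeNotNull<imgFrame*>();

  // RAII form: the target type is spelled out in the call, so `auto` is
  // enough on the left-hand side, much like MakeUnique.
  auto provider = MakeNotNull<RefPtr<AnimationSurfaceProvider>>(/* args */);

  // Base-pointer form: the declaration must name the base type explicitly.
  NotNull<RefPtr<ISurfaceProvider>> base =
      MakeNotNull<RefPtr<AnimationSurfaceProvider>>(/* args */);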
MozReview-Commit-ID: BYaSsvMhiDi
--HG--
extra : rebase_source : 8fe6f2aeaff5f515b7af2276c439004fa3a1f3ab
Currently we only permit requests from HTTP channels to be retargeted to
the image IO thread. It was implemented this way originally in bug
867755 but it does not appear there was a specific reason for that.
The only kink in this is that some browser chrome mochitests listen for
debug-build-only events to ensure certain chrome images are loaded and/or
drawn. As such, this patch ensures that those observer notifications
continue to be served, requiring a dispatch from the image IO thread to
the main thread.
Another issue to note is that SVGs must be processed on the main thread;
the underlying SVG document can only be accessed from it. We enforce
this by checking the content type. The possibility already exists that
an HTTP response could contain the wrong content type, and in that case,
we fail to decode the image, as there is no content sniffing support for
SVG. Thus there should be no additional risk taken by using the image IO
thread from other non-HTTP channels (if they don't specify the SVG
content type, it is not rendered today, and if they do, it will remain
on the main thread as it is today).
We also ignore data URIs. The specification requires that we process
these images synchronously. See bug 1325080 for details.
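The gist of the retargeting decision, with simplified and hypothetical helper names (IsDataURI is assumed):

  // Sketch: decide whether OnDataAvailable may be retargeted off the main
  // thread to the image IO thread.
  bool CanRetargetToImageIOThread(nsIChannel* aChannel,
                                  const nsACString& aContentType) {
    // SVG documents can only be touched on the main thread, and data: URIs
    // must be processed synchronously (bug 1325080).
    if (aContentType.EqualsLiteral("image/svg+xml") || IsDataURI(aChannel)) {
      return false;
    }
    // Any other channel type is fine, not just HTTP as before.
    return true;
  }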
When SurfaceCache::Lookup is called to access surface data, it indicates
that the caller will not accept substitutes, unlike in the case of
SurfaceCache::LookupBestMatch. As such, we need to be careful not to
remove those surfaces from our cache when pruning (in part 8b). This is
the marker used to track that, at some point, there was a caller which
got this surface that would accept no other (e.g. factor of 2 mode must
make an exception for this particular surface).
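Conceptually (flag and class names illustrative), the marker looks like:

  // Set when an exact Lookup returned this surface; pruning must then keep
  // it even if a factor-of-2 substitute exists.
  void CachedSurface::SetCannotSubstitute() { mCannotSubstitute = true; }
  bool CachedSurface::CannotSubstitute() const { return mCannotSubstitute; }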