This was observed in an intermittent failure of image/test/mochitest/test_discardAnimatedImage.html. What happened was:
1) Document::MaybePreLoadImage is called for the images in the test.
2) imgRequest::OnDataAvailable is called for at least one of the images. This creates the RasterImage, so any proxy for this imgRequest will now return the image via GetImage(). imgRequest::OnDataAvailable also queues the FinishPreparingForNewPartRunnable back to the main thread to call OnImageAvailable on the progress tracker.
3) We get the actual LoadImage calls for the images of the document. We create new proxies for the existing imgRequests. imgRequestProxy::Init calls mBehaviour->SetOwner(aOwner), which sets mOwnerHasImage to true because the progress tracker has an mImage (the one created above).
4) We get a call to LockImage; this is forwarded to the RasterImage because mOwnerHasImage is true and we can access the image.
5) The FinishPreparingForNewPartRunnable finally runs on the main thread. The OnImageAvailable notification from the progress tracker ends up in imgRequestProxy::SetHasImage, which applies our local count mLockCount to the RasterImage, even though we've already forwarded one of those LockImage calls to the image. The LockImage calls are now unbalanced and the image will remain locked forever.
The fix is simple: only apply the Lock/Unlock calls once the FinishPreparingForNewPartRunnable has hit the main thread (i.e., ignore an image we can access until that happens).
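Below is a minimal sketch of that gating, assuming a hypothetical mHadImageAvailable flag that is only set once OnImageAvailable has run on the main thread; the member names are illustrative, not the actual imgRequestProxy fields.

```cpp
// Illustrative sketch; mHadImageAvailable and GetImage() stand in for
// the real imgRequestProxy/ProxyBehaviour plumbing.
void imgRequestProxy::LockImage() {
  mLockCount++;
  // Ignore a reachable image until the runnable has hit the main thread;
  // before that, the proxy-level count is the only source of truth.
  if (mHadImageAvailable) {
    if (Image* image = GetImage()) {
      image->LockImage();
    }
  }
}

void imgRequestProxy::SetHasImage() {
  mHadImageAvailable = true;
  // Apply the accumulated count exactly once, now that forwarding begins.
  if (Image* image = GetImage()) {
    for (uint32_t i = 0; i < mLockCount; ++i) {
      image->LockImage();
    }
  }
}
```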
Differential Revision: https://phabricator.services.mozilla.com/D29326
We have a better type to represent "a coord or nothing", and that's Maybe.
The code is shorter and, I think, generally reads better and is harder to
misuse.
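For illustration, here is the shape of the change as a minimal sketch; the function and values are hypothetical, but mozilla::Maybe is the real type.

```cpp
#include "mozilla/Maybe.h"

using mozilla::Maybe;
using mozilla::Nothing;
using mozilla::Some;

// Hypothetical example: the "nothing" state is part of the type rather
// than a sentinel coordinate value that callers must remember to check.
Maybe<nscoord> ComputeContainBSize(bool aContained) {
  if (!aContained) {
    return Nothing();
  }
  return Some(nscoord(0));
}
```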
I wrote this on top of bug 1547126 so there shouldn't be conflicts.
Differential Revision: https://phabricator.services.mozilla.com/D28921
This patch moves the remaining public `enum`s of `nsIPresShell` into the
`mozilla` namespace in `mozilla/PresShellForwards.h` and makes them `enum
class`es. Additionally, it moves some methods which use those `enum`s from
`nsIPresShell` to `PresShell`.
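As an illustrative sketch of the shape of the change (the enum name and values here are placeholders, not the actual moved enums):

```cpp
// mozilla/PresShellForwards.h (sketch)
namespace mozilla {

// Previously a plain public enum on nsIPresShell; as a scoped enum the
// constants no longer leak into the enclosing scope, so callers write
// mozilla::ExampleOption::Foo explicitly.
enum class ExampleOption : uint8_t {
  Foo,
  Bar,
};

}  // namespace mozilla
```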
Differential Revision: https://phabricator.services.mozilla.com/D28607
Changes:
- added comments for tests being disabled
- disabled two additional tests in order to green the run
Differential Revision: https://phabricator.services.mozilla.com/D28085
Changes:
- most tests are skipped using the `moz.build` configuration file.
- `MultiWriterQueue` had to be skipped with `define` clauses in the test file due to build bustages when its `moz.build` file was used.
Differential Revision: https://phabricator.services.mozilla.com/D27944
SVG performance with the fallback path in WebRender is very bad. This
patch avoids fallback by always producing a rasterized surface we store
in SurfaceCache, while also clamping the size consistently to a configured
maximum. This will cause us to upscale rasterized SVGs, which is
visually undesirable, but it is a lower-risk change to uplift to beta than
fixing the underlying performance issue.
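A minimal sketch of the consistent clamping, assuming a hypothetical helper and configured maximum (the real policy lives in SurfaceCache):

```cpp
#include <algorithm>
#include <cstdint>

struct IntSize { int32_t width, height; };

// Hypothetical helper: clamp a requested rasterization size so that
// neither dimension exceeds aMaxSide, preserving aspect ratio. Surfaces
// rasterized at the clamped size get upscaled when drawn larger.
IntSize ClampRasterizedSVGSize(IntSize aRequested, int32_t aMaxSide) {
  int32_t longest = std::max(aRequested.width, aRequested.height);
  if (longest <= aMaxSide) {
    return aRequested;
  }
  double scale = double(aMaxSide) / double(longest);
  return IntSize{int32_t(aRequested.width * scale),
                 int32_t(aRequested.height * scale)};
}
```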
Differential Revision: https://phabricator.services.mozilla.com/D27159
This excludes dom/; otherwise the file size is too large for Phabricator to handle.
This is an autogenerated commit to handle scripts loading mochitest harness files, in
the simple case where the script src is on the same line as the tag.
This was generated with https://bug1544322.bmoattachments.org/attachment.cgi?id=9058170
using the `--part 2` argument.
Differential Revision: https://phabricator.services.mozilla.com/D27456
Additionally, this patch makes `nsDocumentViewer`, which is the only
implementation of `nsIContentViewer`, use `mozilla::PresShell` directly
rather than via `nsIPresShell`.
Differential Revision: https://phabricator.services.mozilla.com/D27470
Disable gtests observed to fail on Android. Some of these are simple build
failures and failures due to file permissions or paths, while other failures
are more obscure.
Once Android gtests are running on mozilla-central, I will file follow-up
bugs inviting teams to investigate the failures and re-enable Android gtests
that are important to them.
Differential Revision: https://phabricator.services.mozilla.com/D26606
Except when retrieving it from a weak reference, `nsIFrame` should use
`mozilla::PresShell` directly rather than via `nsIPresShell`.
Differential Revision: https://phabricator.services.mozilla.com/D26388
Changes:
- skipped problematic tests in the crashtest suite on windows10-aarch64
- removed unnecessary fuzzy pixel values left over from previous iterations of greening tests
Differential Revision: https://phabricator.services.mozilla.com/D25714
The img decode API allows a web author to request that an image be
decoded at its intrinsic size and to be notified when decoding has
completed. This is useful to ensure an image is ready to display before
adding it to the DOM tree -- this helps reduce flickering.
Differential Revision: https://phabricator.services.mozilla.com/D11362
Bug 1503653 added WebP images to this test but didn't update everything necessary in the test file.
As far as I know, this doesn't fix any intermittents with this test.
Differential Revision: https://phabricator.services.mozilla.com/D25527
nsIChannel.LOAD_CLASSIFY_URI is no longer required, so we can remove it from
the codebase.
In the meantime, we add a new LOAD_BYPASS_URL_CLASSIFIER load flag so that
channel creators can force a channel to bypass the URL classifier check.
Use of the new LOAD_BYPASS_URL_CLASSIFIER flag will be addressed in
other patches.
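A hedged usage sketch (the flag name comes from this patch; the surrounding channel code is illustrative):

```cpp
// Illustrative: opt an existing channel out of URL classifier checks.
nsLoadFlags flags = nsIRequest::LOAD_NORMAL;
aChannel->GetLoadFlags(&flags);
aChannel->SetLoadFlags(flags | nsIChannel::LOAD_BYPASS_URL_CLASSIFIER);
```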
Differential Revision: https://phabricator.services.mozilla.com/D22111
libpng uses the first IDAT chunk it encounters as a signal that it has read all header chunks, and sends the info callback.
The testcase PNG has an IDAT chunk, then a z chunk (not a known chunk type), and then another IDAT chunk.
libpng tracks whether we are in an "after IDAT" state, and raises a benign error if it encounters another IDAT chunk in that state, but then it just continues normally, processing the IDAT chunk as if it were the first, and therefore sends the info callback again. This seems silly.
https://searchfox.org/mozilla-central/rev/f1c7ba91fad60bfea184006f3728dd6ac48c8e56/media/libpng/pngpread.c#307
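A sketch of the kind of guard this implies on our side, assuming a hypothetical mGotInfoCallback flag on the decoder (the libpng progressive-read accessor is real):

```cpp
// Illustrative: ignore a duplicate info callback triggered by a stray
// IDAT chunk appearing after the first one.
static void info_callback(png_structp aPng, png_infop aInfo) {
  nsPNGDecoder* decoder =
      static_cast<nsPNGDecoder*>(png_get_progressive_ptr(aPng));
  if (decoder->mGotInfoCallback) {
    return;  // header chunks were already processed once
  }
  decoder->mGotInfoCallback = true;
  // ... normal header processing ...
}
```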
r?njn only because this is the first example that adds any actual subcategories.
Differential Revision: https://phabricator.services.mozilla.com/D11340
In the original Windows clipboard BMP decoder implementation in
nsImageFromClipboard::ConvertColorBitMap, if the bitmap used bitfields
compression, it always adjusted the offset to the RGB data by 12 bytes.
It did this even for newer BMP header formats which explicitly include
space for the bitfields in their header sizes. This patch updates our
BMP decoder to do the same for clipboard BMPs, since we have observed
pasted BMPs using bitfields compression appearing incorrectly. To the
user this appeared as if we read the color mask as pixel data: completely
red, blue, and green pixels at the start of the last row, with all of the
other rows starting with the last three pixels of the previous row.
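A sketch of the adjustment described above (member names illustrative):

```cpp
// Illustrative: for clipboard BMPs using bitfields compression, always
// skip the 12 bytes of R/G/B color masks before the pixel data,
// matching the old ConvertColorBitMap behavior.
if (mIsForClipboard && mHeader.mCompression == Compression::BITFIELDS) {
  mPixelDataOffset += 3 * sizeof(uint32_t);  // 12 bytes of masks
}
```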
Differential Revision: https://phabricator.services.mozilla.com/D19955
When the underlying image request (imgIRequest) changes for an image, we
need to ensure that we invalidate the cached WebRenderImageData such that
the image container stored therein is updated to be for the correct
image. This gets a little tricky because some display items store both
the current and previous images, and choose to display the latter if the
former is not yet ready. We also don't know what image the image
container belongs to. As such, we now compare the producer ID of the
current frame in the image container, to the expected producer ID of the
current image request. If they don't match, we must regenerate the
display list.
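A sketch of that comparison (names hypothetical; the real check lives in the WebRender display item code):

```cpp
// Illustrative: if the frame currently in the image container was not
// produced for the current image request, the cached
// WebRenderImageData is stale and the display list must be rebuilt.
ImageContainer* container = mImageData->GetImageContainer();
if (container && container->GetCurrentProducerId() != expectedProducerId) {
  MarkNeedsDisplayListRebuild();
}
```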
Differential Revision: https://phabricator.services.mozilla.com/D19699
Replacing JS and text occurrences of asyncOpen2.
Replacing open2 with open.
Differential Revision: https://phabricator.services.mozilla.com/D16885
***
Bug 1514594: Part 3a - Change ChromeUtils.import to return an exports object; not pollute global. r=mccr8
This changes the behavior of ChromeUtils.import() to return an exports object,
rather than a module global, in all cases except when `null` is passed as a
second argument, and changes the default behavior not to pollute the global
scope with the module's exports. Thus, the following code written for the old
model:
ChromeUtils.import("resource://gre/modules/Services.jsm");
is approximately the same as the following, in the new model:
var {Services} = ChromeUtils.import("resource://gre/modules/Services.jsm");
Since the two behaviors are mutually incompatible, this patch will land with a
scripted rewrite to update all existing callers to use the new model rather
than the old.
***
Bug 1514594: Part 3b - Mass rewrite all JS code to use the new ChromeUtils.import API. rs=Gijs
This was done using the following script:
https://bitbucket.org/kmaglione/m-c-rewrites/src/tip/processors/cu-import-exports.jsm
***
Bug 1514594: Part 3c - Update ESLint plugin for ChromeUtils.import API changes. r=Standard8
Differential Revision: https://phabricator.services.mozilla.com/D16747
***
Bug 1514594: Part 3d - Remove/fix hundreds of duplicate imports from sync tests. r=Gijs
Differential Revision: https://phabricator.services.mozilla.com/D16748
***
Bug 1514594: Part 3e - Remove no-op ChromeUtils.import() calls. r=Gijs
Differential Revision: https://phabricator.services.mozilla.com/D16749
***
Bug 1514594: Part 3f.1 - Cleanup various test corner cases after mass rewrite. r=Gijs
***
Bug 1514594: Part 3f.2 - Cleanup various non-test corner cases after mass rewrite. r=Gijs
Differential Revision: https://phabricator.services.mozilla.com/D16750
Originally we would invalidate image containers synchronously when
SVGRootRenderingObserver::OnRenderingChange was called. However at this
point in time, the layout tree is in the middle of updating its own
state, and triggering a paint will be problematic. Animated vector
images did not suffer from this problem because they would defer to the
next refresh tick, but non-animated images do not receive refresh tick
events.
As such, we should just defer the invalidation to immediately after the
layout tree has finished updating itself.
Differential Revision: https://phabricator.services.mozilla.com/D15668
If a vector image has an image container, it is unlikely the caller will
call VectorImage::Draw (and thus Show indirectly) to display the image.
As such, WebRender was missing subsequent invalidations and not
regenerating the rasterized surface as expected. We now resume honoring
the invalidations once we have updated the image container.
Differential Revision: https://phabricator.services.mozilla.com/D15667
Given the crash resolved in part 1, it is possible for the blob
rasterizer in the compositor process to still be using surfaces after
the animation has advanced to the next frame. With recycling this can be
problematic as the recycled surface will be reused for a future frame.
In an ideal world, the blob recording would use the animation's image
key instead, but the rasterizer doesn't have easy access to the mapping
table. As such, for any frames used in a blob recording, we now
explicitly mark them as non-recyclable and we will be forced to allocate
a new frame instead.
Differential Revision: https://phabricator.services.mozilla.com/D16192
SchemeIs now only throws exceptions on null arguments. Assert on the
arguments, as they should never be null anyway, and create an
infallible C++ version.
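A sketch of what the infallible wrapper might look like (placement and signature illustrative):

```cpp
// Illustrative: C++ callers get a bool directly; the XPCOM method can
// only fail on a null scheme, which the assertion rules out.
bool nsIURI::SchemeIs(const char* aScheme) {
  MOZ_ASSERT(aScheme, "scheme must not be null");
  bool matches = false;
  MOZ_ALWAYS_SUCCEEDS(SchemeIs(aScheme, &matches));
  return matches;
}
```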
Differential Revision: https://phabricator.services.mozilla.com/D16143
Summary: Really sorry for the size of the patch. It's mostly an automatic
s/nsIDocument/Document/, but I had to fix things up manually in a bunch of
places to add the right namespacing and such.
Overall it's not a very interesting patch I think.
nsDocument.cpp turns into Document.cpp, nsIDocument.h into Document.h and
nsIDocumentInlines.h into DocumentInlines.h.
I also changed a bunch of nsCOMPtr usage to RefPtr, but not all of it.
While fixing up some of the bits I also removed some unneeded OwnerDoc() null
checks and such, but I didn't do anything riskier than that.
This needs to add a few includes in other places which were relying on the
massive (now gone) list in nsDocument.h.
I also needed to move an AnimationTimeline destructor out of line because it
relied on dom::Animation being defined, yet Animation.h includes
AnimationTimeline.h, so include hell.
Differential Revision: https://phabricator.services.mozilla.com/D15366
This is a big step toward merging both.
It also allows us to remove some very silly casts, though it causes us to
add some ToSupports calls to deal with the ambiguity of casts from
nsIDocument to nsISupports, and to add a dummy nsISupports implementation
that will go away later in the series.
Differential Revision: https://phabricator.services.mozilla.com/D15352
Both libjpeg and windows.h typedef `boolean` to different types (`int` and
`unsigned char`, respectively), and nsJPEGEncoder's public definition includes
a function that returns a `boolean`. Exposing this header results in type
conflicts.
We now isolate the internals of nsJPEGEncoder into a friend class whose
internals are hidden from the public, allowing the header to be exported.
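A sketch of the isolation pattern (simplified):

```cpp
// nsJPEGEncoder.h (sketch): with no jpeglib.h included here, there is
// no `boolean` in the public header and no conflict with windows.h.
class nsJPEGEncoderInternal;  // defined in the .cpp beside jpeglib.h

class nsJPEGEncoder final : public imgIEncoder {
  friend class nsJPEGEncoderInternal;
  // ... the public interface exposes no libjpeg types ...
};
```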
Differential Revision: https://phabricator.services.mozilla.com/D14638
MozPromise's most common use is to have a single, exclusive listener. By making the MozPromise generated by IPDL exclusive, we can also use move semantics.
While at it, we also use move semantics for the ResponseRejectReason via the callback's reject method, so that the lambda used with MozPromise::Then can be identical to the one used by the IPDL callback.
As it currently stands, this provides no advantage over a copy, as it's just an enum; however, it will facilitate future changes where it may not be.
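An illustrative sketch (the message and result types are hypothetical) of how an exclusive promise lets the same move-taking lambdas serve both paths:

```cpp
// Illustrative: with an exclusive promise, Then() callbacks may take
// their arguments by rvalue reference and move out of them.
SendSomeIpdlMessage()->Then(
    GetCurrentThreadSerialEventTarget(), __func__,
    [](SomeResult&& aResult) {
      // resolve: consume aResult by move
    },
    [](mozilla::ipc::ResponseRejectReason&& aReason) {
      // reject: the same lambda shape works for the IPDL callback
    });
```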
Differential Revision: https://phabricator.services.mozilla.com/D13906
When an animation is reset, we can actually avoid reallocations by
keeping the frames in the mRecycle and mDiscard queues; there is no real
need to throw them away. We just need to make sure the recycle rect
calculation is done appropriately, particularly when we haven't reached
the end yet and thus don't know the first frame refresh area yet.
Differential Revision: https://phabricator.services.mozilla.com/D13465
The size was originally incremented in AnimationFrameBuffer::Insert;
however, if an animation was reset before we finished decoding, it would
count some frames twice in the counter. Now we increment it inside
InsertInternal, where AnimationFrameDiscardingQueue can make a more
informed decision on whether the frame is a duplicate or not.
Additionally, we now fail explicitly when we insert more frames on
subsequent decodes than the original decode produced. This will help
avoid getting out of sync with FrameAnimator.
Differential Revision: https://phabricator.services.mozilla.com/D13464
If an animated frame buffer was reset just before the necessary frame to
cross the discard threshold, followed by said frame being inserted by
the decoder, it would insert a null pointer into the display queue for
the first frame. This is because it assumed that we have always advanced
past the first frame -- which was true, but the reset placed us back at
the beginning.
This would initially manifest to the user as the animation stopping,
since it could not advance past the first frame. Once a memory report
was requested, it would crash because we assume every frame in the
display queue is valid.
This patch removes the assumption about what frame we have advanced to.
Differential Revision: https://phabricator.services.mozilla.com/D13407
- wrap lines at up to 80 chars (tw=80)
- set tab size to 2 chars everywhere (sts=2, sw=2)
This is a best effort attempt at ensuring that the adverse impact of
reformatting the entire tree over the comments would be minimal. I've used a
combination of strategies including disabling of formatting, some manual
formatting and some changes to formatting to work around some clang-format
limitations.
Differential Revision: https://phabricator.services.mozilla.com/D13193
When bug 1508393 landed, it not only enabled producing of full frames
on the decoder threads, it also enabled recycling of old animated image
frame buffers it would have otherwise discarded. It reuses the contents
of the buffer where possible given we know what pixels changed between
the old frame and the frame we want to produce. However where this
calculation was done was incorrect. We originally calculated it when we
advanced the frame, but at that point there is no guarantee that we have
all of the necessary information; we may have fallen behind on decoding.
As such, we move the calculation to where we actually perform the
recycling. At this point we are guaranteed to have all the necessary
frames between the recycling and display queues.
Differential Revision: https://phabricator.services.mozilla.com/D12903
WebRender takes longer than OMTP to release its hold on the current
frame. This is because it is in a separate process and holds onto the
surface between rendering frames, rather than taking a reference for
each repaint. This patch makes us less aggressive about taking the most
recently added surface out of the recycling queue, to avoid blocking
while we wait for the surface to be released.
Differential Revision: https://phabricator.services.mozilla.com/D10903
Also, fix a minor bug where when we discard an animated image that is a
full frame animated image, we need to reset
AnimationState::mCompositedFrameInvalid to false, just like we do for
animated images blended by FrameAnimator. This is because it is used as
part of our state checking in FrameAnimator::GetCompositedFrame before
we are willing to yield frame data.
Differential Revision: https://phabricator.services.mozilla.com/D12362
Bug 1249474 suggested that we add image/webp to the front of the Accept
header for images, to indicate to servers that we actually support WebP.
Differential Revision: https://phabricator.services.mozilla.com/D8120
Redecode errors break the state machine of FrameAnimator, since the
decoder and the animation state are now out of sync. Going forward this
tight coupling should be eliminated, as the decoder will produce full
frames and the animator can just take the current frame without worrying
about its relative position. For the moment, we should simply not reset an
animation if it hit a redecode error (likely due to OOM), just as we
already stop advancing the animation before the reset.
Differential Revision: https://phabricator.services.mozilla.com/D11046
Behavior-wise this only removes the HasAttr(src) check, and adds the IsEmpty()
check to the alt attribute value, since this function is only called for <img>
and <input>.
But it also cleans up a bit.
Differential Revision: https://phabricator.services.mozilla.com/D11194
There were two unrelated buffering problems in nsWebPDecoder. The first
was with the decoder contract. We are expected to loop until the
iterator is unable to provide more data, and wait for the SourceBuffer
to reschedule us, whereas nsWebPDecoder::DoDecode only did one pass.
Thus when something yielded wanting more data, we would just wait
forever.
The second was the integration with the libwebp API. We are expected to
retry when we receive SUSPENDED from the decoder, as it decided to yield
pixels instead of continuing to decode as many as possible.
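A sketch of the corrected contract, simplified from the control flow the imagelib decoder framework expects:

```cpp
// Simplified sketch: loop until the iterator cannot advance, yielding
// so the SourceBuffer can reschedule us when more data arrives.
LexerResult nsWebPDecoder::DoDecode(SourceBufferIterator& aIterator,
                                    IResumable* aOnResume) {
  while (true) {
    switch (aIterator.AdvanceOrScheduleResume(SIZE_MAX, aOnResume)) {
      case SourceBufferIterator::WAITING:
        return LexerResult(Yield::NEED_MORE_DATA);
      case SourceBufferIterator::COMPLETE:
        return LexerResult(TerminalState::SUCCESS);
      default:  // READY: feed the new chunk to libwebp, retrying on
                // VP8_STATUS_SUSPENDED since the decoder may yield
                // pixels rather than consume all of the input.
        break;
    }
  }
}
```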
The tests did not cover the first problem because multi-chunk decoder
tests did not use SourceBuffer scheduling. This was an oversight. They now
write a chunk of data, let the SourceBuffer reschedule the decoder,
and repeat until all of the data has been written.
The tests did not cover the second problem because all of the reference
WebP images are too small. This patch adds a new test with a large WebP
image (converted from a Mozilla all hands photo of lanyards). This
should actually trigger the SUSPEND behaviour of libwebp.
Differential Revision: https://phabricator.services.mozilla.com/D10817
For decoders which produce unpaletted partial frames (APNG, WebP), the
surface format should always be BGRA. These frames, while partial, are
the same size as the output size of the animated image. When
FrameAnimator performs the blend with the compositing frame, it expects
all pixels we don't care about to be set to fully transparent. If it is
BGRX, they will be set to solid white instead.
Differential Revision: https://phabricator.services.mozilla.com/D10753
This patch makes ImageContainer create a SharedSurfacesAnimation object
when it detects that we are using shared surfaces and are producing full
frames.
Differential Revision: https://phabricator.services.mozilla.com/D7505
First, we did not handle the SourceBufferIterator::WAITING state, which
can happen when we get woken up but there is no data to read from the
SourceBufferIterator. StreamingLexer handled this properly by yielding
with NEED_MORE_DATA, and properly scheduling the decoder to resume. This
patch does the same in the WebP decoder.
Second, nsWebPDecoder::GetType was not implemented. This meant it would
return DecoderType::UNKNOWN, and would fail to recreate the decoder if
we are discarding frames and need to restart from the beginning. In
addition to implementing that method, this patch also corrects an assert
in DecoderFactory::CloneAnimationDecoder which failed to check for WebP
as a supported animated decoder.
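The missing override is essentially one line (sketch):

```cpp
// Without this override the base class reported DecoderType::UNKNOWN,
// so a discarded animated WebP could not recreate its decoder.
DecoderType nsWebPDecoder::GetType() const { return DecoderType::WEBP; }
```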
This patch also modestly improves the logging output and library method
checks.
Differential Revision: https://phabricator.services.mozilla.com/D10624
This fixes several tests which snapshot remote windows under Fission. It also
changes some other, arbitrary tests that don't use remote windows; I converted
those before I gave up on having an always-async API.
Differential Revision: https://phabricator.services.mozilla.com/D41630
We should only assert that the caller is requesting the first frame, or
that we have advanced to or beyond the expected initial frame, when we
successfully return a frame.
on refresh ticks for the current frame again, until it observes it. If
decoding is still behind, then we likely still have frames to
auto-advance, and we will trip the assert.
Differential Revision: https://phabricator.services.mozilla.com/D9507
This is what we have been working towards in all of the previous parts
in the series. This subclasses AnimationFrameDiscardingQueue to save the
discarded frames for recycling by the decoder, if the frame is marked as
supporting recycling.
Differential Revision: https://phabricator.services.mozilla.com/D7516
AnimationFrameDiscardingQueue subclasses AnimationFrameBuffer to allow a
cleaner abstraction over the behaviour change when we cross the
threshold of too high a memory footprint for an animated image. The next
patch will build on top of this to provide an abstraction to reuse the
discarded frames.
Differential Revision: https://phabricator.services.mozilla.com/D7515
This patch makes AnimationSurfaceProvider use the new abstractions
provided by AnimationFrameBuffer and AnimationFrameRetainedBuffer to
provide storage and lifetime management of decoders and the produced
frames. We initially start out with an implementation that will just
keep every frame forever, like our historical behaviour. The next patch
will add support for discarding.
Differential Revision: https://phabricator.services.mozilla.com/D7514
In the next few patches, AnimationFrameBuffer will be reworked to split
the discarding behaviour from the retaining behaviour. Once implemented
as separate classes, it will allow easier reuse of the discarding code
for recycling.
Differential Revision: https://phabricator.services.mozilla.com/D7513
The owner for the decoder may implement IDecoderFrameRecycler to allow
the decoder to request a recycled frame instead of allocating a new one.
If none are available, it will fall back to allocating a new frame.
Not only may the IDecoderFrameRecycler not have any frames available for
recycling, the recycled frame itself may still be in use by other
entities outside of imagelib. Additionally it may still be required by
BlendAnimationFilter to restore the previous frame's data; it may even
be the same frame that is to be restored. In the worst case, we will simply
choose to allocate an entirely new frame, just like before.
When we allocate a new frame, that means the old frame we tried to
recycle will be taken out of circulation and not reused again,
regardless of why it failed.
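A sketch of the fallback (the interface name comes from this patch; the rest is illustrative):

```cpp
// Illustrative: ask the owner for a recycled frame, and fall back to a
// fresh allocation if none is available or the frame is still in use.
RawAccessFrameRef frame;
if (IDecoderFrameRecycler* recycler = GetFrameRecycler()) {
  frame = recycler->RecycleFrame(recycleRect);
}
if (!frame) {
  // The rejected candidate, if any, is now out of circulation for good.
  frame = AllocateNewFrame();
}
```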
Differential Revision: https://phabricator.services.mozilla.com/D7512
Beyond the necessary reinitialization methods, we need to protect
ourselves from recycling a frame that some other entity in the browser
is still using. Generally speaking the animated surface will only be
used in imgFrame::Draw since we don't layerize animated images, which
will be safe. However with OMTP or blob recordings, we could retain a
reference to the surface outside the current stack context. Additionally,
if something calls RasterImage::GetImageContainer(AtSize) or
RasterImage::GetFrame(AtSize), it may also hold a reference to the
surface for an indeterminate period of time.
As such, if an imgFrame is a candidate for recycling, it will wrap
imgFrame::mLockedSurface in a RecyclingSourceSurface. Its job is to
track how many consumers there are still of the surface, so that after
we advance the animation, the decoder will know if there are still
outstanding consumers.
If the surface is still in use, it will block for a finite period of
time (the refresh interval) before giving up on reclaiming the surface,
and will allocate a new surface. The old surface can then remain in
circulation for as long as necessary without further blocking the
animation progression, since we stop recycling that surface/imgFrame.
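A sketch of the wrapping (the class name comes from this patch; the internals are illustrative):

```cpp
// Illustrative: hand out a wrapper that counts consumers of the locked
// surface for its lifetime, so that after advancing the animation the
// decoder can tell whether the buffer is safe to reuse.
already_AddRefed<SourceSurface> imgFrame::GetSourceSurface() {
  if (mShouldRecycle) {
    RefPtr<SourceSurface> surface =
        new RecyclingSourceSurface(this, mLockedSurface);
    return surface.forget();
  }
  return do_AddRef(mLockedSurface);
}
```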
Differential Revision: https://phabricator.services.mozilla.com/D7511
Since imgFrame::Draw will limit the drawing to only look at pixels that
we have written to and posted an invalidation for, there is no need to
hold the monitor while doing so. By taking the most expensive operation
outside the lock, we will minimize our chances of hitting contention
with the decoder thread.
A later part in this series will require that a surface be freed outside
the lock because it may end up reacquiring it. In addition to the
contention win, this change should facilitate that.
Differential Revision: https://phabricator.services.mozilla.com/D7510
If we discard a frame during decoding, e.g. due to an error, then we
don't want to take that frame into account for the first frame refresh
area. We should also be handling partial frames here (the dirty rect
needs to encompass the rows that did not get written with actual pixel
data). The only place that can have the necessary information is at the
end rather than at the beginning.
Differential Revision: https://phabricator.services.mozilla.com/D7509
The clear rect and the recycle rect can overlap, and depending on the
size of the clear rect, it could be a significant amount of data that
needs to be copied from the restore frame. This patch minimizes the
copying for a row which contains both the recycle rect and the clear
rect.
Differential Revision: https://phabricator.services.mozilla.com/D7508
Given an invalidation rect, called the recycle rect, which represents
the area which may have changed between the current frame and the frame
we are recycling, we can not only reuse the buffer itself to avoid an
allocation and free, we can also avoid copying pixel data from the
restore frame which is already set.
Differential Revision: https://phabricator.services.mozilla.com/D7507
When blending full frames off the main thread, FrameAnimator no longer
requires access to the raw data of the frame to advance the animation.
Now we only request a RawAccessFrameRef for the current/next frames when
we have discovered that we need to do blending on the main thread.
In addition to avoiding the mutex overhead of RawAccessFrameRef, this
will also facilitate potentially optimizing the surfaces for the
DrawTarget for individual animated image frames.
Differential Revision: https://phabricator.services.mozilla.com/D7506
We were marking them used even if only a decode was requested.
This can cause us to hold extra decoded copies of the image around because we have a tendency to request decode at the intrinsic size.
Calls to do_QueryInterface to a base class can be replaced by a static
cast, which is faster.
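The pattern, sketched with illustrative types:

```cpp
// Before: a QueryInterface call used to reach a base interface.
nsCOMPtr<nsIRequest> viaQI = do_QueryInterface(mChannel);

// After: nsIChannel derives from nsIRequest, so a static cast suffices
// and avoids the QI overhead entirely.
nsIRequest* viaCast = static_cast<nsIRequest*>(mChannel.get());
```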
Differential Revision: https://phabricator.services.mozilla.com/D7224
If class A is derived from class B, then an instance of class A can be
converted to B via a static cast, so a slower QI is not needed.
Differential Revision: https://phabricator.services.mozilla.com/D6861
The lack of clarity over which functions initiate observing and which don't
is a headache since it makes it hard to reason about what's going on. This
rename makes it explicit in the function names.
Differential Revision: https://phabricator.services.mozilla.com/D7187
If we do not pass the high quality scaling flag, then the resulting surface will be marked as "cannot substitute", which is not accurate and not what we want.
The only place that actually tries to be smart about the size is nsImageFrame::MaybeDecodeForPredictedSize. All other cases just ask for the intrinsic size.
The two most likely cases are that there are no decoded copies of the image, or there is one decoded (or in progress) copy of the image.
In the first case we will request a decode at the intrinsic size, and then if we draw at a different size, that draw will request the proper size. This doesn't change with this patch.
In the second case there is a decoded copy already available, likely from a draw call on the image, and that is the surface size we want, so we save a decode. If we are actually drawing the image at two different sizes, the second size will be slightly delayed, but we have the wrongly sized copy of the image to draw until then. This seems like a good tradeoff to avoid always decoding an intrinsic-size copy of images.
By delegating responsibility for shared surfaces reporting to imagelib,
we can cross reference the GPU shared surfaces cache with the local
surface cache in a content process (or the main process). This will
allow us to identify entries that are in the GPU cache but not in the
content/main process cache, and aid in debugging memory leaks. This
functionality is pref'd off by default behind image.mem.debug-reporting.
Additionally, we want to report every entry that was mapped into the
compositor process, in the compositor process memory report. This will
give us a sense of how much of our resident memory is consumed by mapped
images in absence of the more detailed cross referencing above.
At present, surface providers roll up all of their individual surfaces
into a single reporting unit. Specifically this means animated image
frames are all reported as a block. This patch removes that
consolidation and reports every frame as its own SurfaceMemoryReport.
This is important because each frame may have its own external image ID,
and we want to cross reference that with what we expect from the GPU
shared surfaces cache.
If FLAG_HIGH_QUALITY_SCALING is used, we should use
SurfaceCache::LookupBestMatch just like how it is done in RasterImage.
This may provide an alternative size at which we should rasterize the
SVG instead of the requested size. Since SurfaceCache imposes a maximum
size for which it will permit rasterized SVGs, we should also bypass the
cache entirely if we are well above that and simply draw directly to the
draw target in such cases.
With WebRender, it is somewhat more complicated. We will now return
NOT_SUPPORTED if the size is too big, and this should trigger fallback
to blob images. This should only produce drawing commands for the
relevant region and save us the high cost of rasterized a very large
surface on the main thread, which at the same time, looking as crisp as
a user would expect.
There is one main difference between raster images and vector images
with respect to factor of 2 scaling. Vector images may be scaled
infinitely and so we need to extend factor of 2 scaling to permit
growing instead of just shrinking. Also, we don't want to scale
infinitely, so we should configure a maximum size limit. This size limit
will apply even outside of factor of 2 scaling, and so the caller
(VectorImage) will need to be careful to take this into account.
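A minimal sketch of factor-of-2 sizing extended to growth (the policy and names are illustrative; the real logic lives in SurfaceCache):

```cpp
#include <cstdint>

struct IntSize { int32_t width, height; };

// Illustrative: grow from the intrinsic size by doubling until the
// request is covered, but never past the configured maximum side.
IntSize FactorOfTwoScale(IntSize aIntrinsic, IntSize aRequested,
                         int32_t aMaxSide) {
  IntSize size = aIntrinsic;
  while ((size.width < aRequested.width ||
          size.height < aRequested.height) &&
         size.width * 2 <= aMaxSide && size.height * 2 <= aMaxSide) {
    size.width *= 2;
    size.height *= 2;
  }
  return size;  // callers must tolerate receiving a smaller size back
}
```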