Replacing JS and text occurrences of asyncOpen2
Replacing open2 with open
Differential Revision: https://phabricator.services.mozilla.com/D16885
--HG--
rename : layout/style/test/test_asyncopen2.html => layout/style/test/test_asyncopen.html
extra : moz-landing-system : lando
***
Bug 1514594: Part 3a - Change ChromeUtils.import to return an exports object; not pollute global. r=mccr8
This changes the behavior of ChromeUtils.import() to return an exports object,
rather than a module global, in all cases except when `null` is passed as a
second argument. It also changes the default behavior so that the caller's
global scope is no longer polluted with the module's exports. Thus, the
following code, written for the old model:
ChromeUtils.import("resource://gre/modules/Services.jsm");
is approximately the same as the following, in the new model:
var {Services} = ChromeUtils.import("resource://gre/modules/Services.jsm");
Since the two behaviors are mutually incompatible, this patch will land with a
scripted rewrite to update all existing callers to use the new model rather
than the old.
***
Bug 1514594: Part 3b - Mass rewrite all JS code to use the new ChromeUtils.import API. rs=Gijs
This was done using the following script:
https://bitbucket.org/kmaglione/m-c-rewrites/src/tip/processors/cu-import-exports.jsm
***
Bug 1514594: Part 3c - Update ESLint plugin for ChromeUtils.import API changes. r=Standard8
Differential Revision: https://phabricator.services.mozilla.com/D16747
***
Bug 1514594: Part 3d - Remove/fix hundreds of duplicate imports from sync tests. r=Gijs
Differential Revision: https://phabricator.services.mozilla.com/D16748
***
Bug 1514594: Part 3e - Remove no-op ChromeUtils.import() calls. r=Gijs
Differential Revision: https://phabricator.services.mozilla.com/D16749
***
Bug 1514594: Part 3f.1 - Cleanup various test corner cases after mass rewrite. r=Gijs
***
Bug 1514594: Part 3f.2 - Cleanup various non-test corner cases after mass rewrite. r=Gijs
Differential Revision: https://phabricator.services.mozilla.com/D16750
--HG--
extra : rebase_source : 359574ee3064c90f33bf36c2ebe3159a24cc8895
extra : histedit_source : b93c8f42808b1599f9122d7842d2c0b3e656a594%2C64a3a4e3359dc889e2ab2b49461bab9e27fc10a7
Given the crash resolved in part 1, it is possible for the blob
rasterizer in the compositor process to still be using surfaces after
the animation has advanced to the next frame. With recycling this can be
problematic as the recycled surface will be reused for a future frame.
In an ideal world, the blob recording would use the animation's image
key instead, but the rasterizer doesn't have easy access to the mapping
table. As such, we now explicitly mark any frames used in a blob recording
as non-recyclable, forcing us to allocate a new frame instead.
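A minimal sketch of the idea, with hypothetical types standing in for the
real imagelib surfaces and recycle pool (none of the names below are the
actual interfaces):

  #include <memory>
  #include <utility>
  #include <vector>

  // Hypothetical stand-ins for an animated image frame and its recycle pool.
  struct Frame {
    bool mUsedInBlobRecording = false;  // set when handed to the blob rasterizer
  };

  class RecyclePool {
  public:
    void Release(std::unique_ptr<Frame> aFrame) {
      // The compositor-side blob rasterizer may still be reading this frame,
      // so never return it to the pool for reuse.
      if (aFrame->mUsedInBlobRecording) {
        return;  // dropped; Acquire() will allocate a fresh frame later
      }
      mPool.push_back(std::move(aFrame));
    }

    std::unique_ptr<Frame> Acquire() {
      if (mPool.empty()) {
        return std::make_unique<Frame>();  // forced to allocate a new frame
      }
      std::unique_ptr<Frame> frame = std::move(mPool.back());
      mPool.pop_back();
      return frame;
    }

  private:
    std::vector<std::unique_ptr<Frame>> mPool;
  };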
Differential Revision: https://phabricator.services.mozilla.com/D16192
When an animation is reset, we can actually avoid the reallocations by
keeping the frames in the mRecycle and mDiscard queues; there is no real
need to throw them away and reallocate. We just need to make sure the
recycle rect calculation is done appropriately, particularly when we
haven't reached the end of the animation yet and thus don't yet know the
first frame refresh area.
Differential Revision: https://phabricator.services.mozilla.com/D13465
The size was originally incremented in AnimationFrameBuffer::Insert;
however, if an animation was reset before we finished decoding, some
frames would be counted twice. Now we increment it inside InsertInternal,
where AnimationFrameDiscardingQueue can make a more informed decision
about whether the frame is a duplicate.
Additionally, we now fail explicitly when a subsequent decode inserts more
frames than the original decode did. This will help avoid getting out of
sync with FrameAnimator.
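A rough sketch of the shape of this change, using hypothetical classes
rather than the real AnimationFrameBuffer hierarchy:

  #include <cstddef>
  #include <utility>
  #include <vector>

  // Hypothetical stand-ins; not the real imagelib types.
  struct Frame {
    int mIndex = 0;
  };

  class FrameBuffer {
  public:
    virtual ~FrameBuffer() = default;

    void Insert(Frame aFrame) {
      // The size is no longer bumped here; the subclass decides whether the
      // inserted frame is genuinely new or a redecoded duplicate.
      InsertInternal(std::move(aFrame));
    }

    size_t Size() const { return mSize; }

  protected:
    virtual void InsertInternal(Frame aFrame) {
      mFrames.push_back(std::move(aFrame));
      ++mSize;  // a simple retaining buffer never sees duplicates
    }

    std::vector<Frame> mFrames;
    size_t mSize = 0;
  };

  class DiscardingFrameBuffer : public FrameBuffer {
  protected:
    void InsertInternal(Frame aFrame) override {
      // After a reset, earlier frames get decoded again; only frames beyond
      // the highest index seen so far count towards the total.
      if (aFrame.mIndex > mHighestIndex) {
        mHighestIndex = aFrame.mIndex;
        ++mSize;
      }
      mFrames.push_back(std::move(aFrame));
    }

  private:
    int mHighestIndex = -1;
  };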
Differential Revision: https://phabricator.services.mozilla.com/D13464
If an animated frame buffer was reset just before the decoder inserted
the frame needed to cross the discard threshold, we would insert a null
pointer into the display queue for the first frame. This is because the
code assumed we had always advanced past the first frame -- which was
true, until the reset placed us back at the beginning.
This would initially manifest to the user as the animation stopping,
since it could not advance past the first frame. Once a memory report
was requested, it would crash because we assume every frame in the
display queue is valid.
This patch removes the assumption about what frame we have advanced to.
Differential Revision: https://phabricator.services.mozilla.com/D13407
When bug 1508393 landed, it not only enabled producing full frames on
the decoder threads, it also enabled recycling of old animated image
frame buffers that would otherwise have been discarded. We reuse the
contents of the buffer where possible, since we know which pixels changed
between the old frame and the frame we want to produce. However, this
calculation was done in the wrong place. We originally calculated it when
we advanced the frame, but at that point there is no guarantee that we
have all of the necessary information; we may have fallen behind on
decoding.
As such, we move the calculation to where we actually perform the
recycling. At this point we are guaranteed to have all the necessary
frames between the recycling and display queues.
Differential Revision: https://phabricator.services.mozilla.com/D12903
WebRender takes longer than OMTP to release its hold on the current
frame. This is because it is in a separate process and holds onto the
surface in between rendering frames, rather than getting a reference for
each repaint. This patch makes us less aggressive about taking the most
recently placed surface out of the recycling queue, to avoid blocking
while we wait for it to be released.
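A toy sketch of the less aggressive behaviour, assuming a hypothetical
recycling queue type (not the real one):

  #include <deque>
  #include <memory>
  #include <utility>

  struct Surface {};

  class RecycleQueue {
  public:
    void Push(std::unique_ptr<Surface> aSurface) {
      mQueue.push_back(std::move(aSurface));
    }

    std::unique_ptr<Surface> TakeOldestIfSafe() {
      // The most recently returned surface may still be held by the
      // compositor process, so leave it alone rather than blocking until it
      // is released; the caller allocates a fresh surface if we return null.
      if (mQueue.size() < 2) {
        return nullptr;
      }
      std::unique_ptr<Surface> surface = std::move(mQueue.front());
      mQueue.pop_front();
      return surface;
    }

  private:
    std::deque<std::unique_ptr<Surface>> mQueue;
  };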
Differential Revision: https://phabricator.services.mozilla.com/D10903
Bug 1249474 suggested that we add image/webp to the front of the Accept
header for images, to indicate to servers that we actually support WebP.
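For illustration only (the exact value here is an assumption, not taken
from the patch), an image request would then send something like:

  Accept: image/webp,*/*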
Differential Revision: https://phabricator.services.mozilla.com/D8120
Behavior-wise this only removes the HasAttr(src) check, and adds the IsEmpty()
check to the alt attribute value, since this function is only called for <img>
and <input>.
But it also cleans up a bit.
Differential Revision: https://phabricator.services.mozilla.com/D11194
There were two unrelated buffering problems in nsWebPDecoder. The first
was with the decoder contract. We are expected to loop until the
iterator is unable to provide more data, and wait for the SourceBuffer
to reschedule us, whereas nsWebPDecoder::DoDecode only did one pass.
Thus, when something yielded wanting more data, we would just wait
forever.
The second was the integration with the libwebp API. We are expected to
retry when we receive SUSPENDED from the decoder, as it decided to yield
pixels instead of continuing to decode as many as possible.
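A rough sketch of the corrected loop shape; every type and function below
is a hypothetical stand-in, not the real SourceBufferIterator or libwebp
API:

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  enum class IteratorState { Ready, Waiting, Complete };
  enum class LexerResult { NeedMoreData, Finished };
  enum class LibResult { Ok, Suspended };

  struct Iterator {
    std::vector<std::vector<uint8_t>> mChunks;
    size_t mNext = 0;

    IteratorState Advance() {
      if (mNext < mChunks.size()) {
        return IteratorState::Ready;
      }
      // Nothing buffered right now; the owner reschedules us when more data
      // arrives, or reports completion.
      return IteratorState::Waiting;
    }

    const std::vector<uint8_t>& Data() { return mChunks[mNext++]; }
  };

  // Placeholder for handing a chunk of bytes to the underlying library.
  LibResult FeedLibrary(const std::vector<uint8_t>&) { return LibResult::Ok; }

  LexerResult DoDecode(Iterator& aIterator) {
    // Problem 1: loop until the iterator has nothing more to give, instead
    // of doing a single pass and then waiting forever.
    while (true) {
      IteratorState state = aIterator.Advance();
      if (state == IteratorState::Waiting) {
        return LexerResult::NeedMoreData;  // we will be rescheduled later
      }
      if (state == IteratorState::Complete) {
        return LexerResult::Finished;
      }

      const std::vector<uint8_t>& chunk = aIterator.Data();
      // Problem 2: a suspended result means the library yielded pixels
      // rather than consuming everything, so retry instead of giving up.
      while (FeedLibrary(chunk) == LibResult::Suspended) {
      }
    }
  }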
The tests did not cover the first problem because multi-chunk decoder
tests did not use SourceBuffer scheduling. This was an oversight. They
will now write a chunk of data, let the SourceBuffer reschedule the
decoder, and repeat until all of the data has been written.
The tests did not cover the second problem because all of the reference
WebP images are too small. This patch adds a new test with a large WebP
image (converted from a Mozilla all hands photo of lanyards). This
should actually trigger the SUSPENDED behaviour of libwebp.
Differential Revision: https://phabricator.services.mozilla.com/D10817
For decoders which produce unpaletted partial frames (APNG, WebP), the
surface format should always be BGRA. These frames, while partial, are
the same size as the output size of the animated image. When
FrameAnimator performs the blend with the compositing frame, it expects
all pixels we don't care about to be set to fully transparent. If it is
BGRX, they will be set to solid white instead.
Differential Revision: https://phabricator.services.mozilla.com/D10753
First, we did not handle the SourceBufferIterator::WAITING state, which
can happen when we get woken up but there is no data to read from the
SourceBufferIterator. StreamingLexer handled this properly by yielding
with NEED_MORE_DATA, and properly scheduling the decoder to resume. This
patch does the same in the WebP decoder.
Second, nsWebPDecoder::GetType was not implemented. This meant it would
return DecoderType::UNKNOWN, and would fail to recreate the decoder if
we are discarding frames and need to restart from the beginning. In
addition to implementing that method, this patch also corrects an assert
in DecoderFactory::CloneAnimationDecoder which failed to check for WebP
as a supported animated decoder.
This patch also modestly improves the logging output and library method
checks.
Differential Revision: https://phabricator.services.mozilla.com/D10624
In the next few patches, AnimationFrameBuffer will be reworked to split
the discarding behaviour from the retaining behaviour. Once implemented
as separate classes, it will allow easier reuse of the discarding code
for recycling.
Differential Revision: https://phabricator.services.mozilla.com/D7513
When blending full frames off the main thread, FrameAnimator no longer
requires access to the raw data of the frame to advance the animation.
Now we only request a RawAccessFrameRef for the current/next frames when
we have discovered that we need to do blending on the main thread.
In addition to avoiding the mutex overhead of RawAccessFrameRef, this
will also facilitate potentially optimizing the surfaces for the
DrawTarget for individual animated image frames.
Differential Revision: https://phabricator.services.mozilla.com/D7506
When generating display lists for WebRender, we were not caching the
draw result via nsDisplayItemGenericImageGeometry::UpdateDrawResult (or
similar) after completing CreateWebRenderCommands. This is important
because reftests rely on that cached result to force sync decoding for
images; the omission may be a reason for image-related intermittent
failures on *-qr builds.
Additionally, we may have been requesting fallback in cases where fallback
could not do anything more than WebRender could. For example, if we can't
get an image container yet, there is no point in requesting fallback
because it might just be we haven't started decoding yet. We should just
return the actual draw result in such cases.
In addition to the image container, the draw result can also be useful
for callers to know whether the surface(s) in the container are fully
decoded. This is used in subsequent parts to avoid
flickering in some cases.
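A sketch of the intended decision logic, with placeholder names rather
than the real display item API:

  enum class DrawOutcome { Success, NotReady, NotSupported };

  struct CommandResult {
    bool requestFallback;
    DrawOutcome outcome;
  };

  CommandResult BuildImageCommands(bool aHaveContainer,
                                   bool aSupportedByWebRender) {
    if (!aSupportedByWebRender) {
      // Fallback can genuinely do more than WebRender here, so request it.
      return {true, DrawOutcome::NotSupported};
    }
    if (!aHaveContainer) {
      // Decoding may simply not have started or finished yet; fallback
      // cannot help, so report the real (incomplete) result instead.
      return {false, DrawOutcome::NotReady};
    }
    // ... emit the WebRender commands for the image here ...
    // Afterwards, the outcome is cached (in Gecko, via
    // nsDisplayItemGenericImageGeometry::UpdateDrawResult) so that reftests
    // forcing sync decode can observe whether the surface was fully decoded.
    return {false, DrawOutcome::Success};
  }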