libpng treats the first IDAT chunk it encounters as a signal that it has read all of the header chunks, and sends the info callback.
The test case PNG has an IDAT chunk, then a "z" chunk (not a known chunk type), and then another IDAT chunk.
libpng tracks whether we are in an "after IDAT" state, and throws a benign error if it encounters another IDAT chunk while in that state, but then it continues normally, processing the IDAT chunk as if it were the first one, and therefore sends the info callback again. This seems silly.
https://searchfox.org/mozilla-central/rev/f1c7ba91fad60bfea184006f3728dd6ac48c8e56/media/libpng/pngpread.c#307
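For context, a minimal sketch of a progressive-read consumer and the kind of guard it needs (the callback names and the guard flag are hypothetical, not libpng or Mozilla code; setjmp error handling is omitted for brevity):

    #include <png.h>
    #include <cstdint>

    // Hypothetical guard: without it, the duplicate info callback described
    // above would re-run header setup.
    static bool sSeenInfo = false;

    static void info_cb(png_structp aPng, png_infop aInfo) {
      if (sSeenInfo) {
        return;  // a second IDAT re-triggered the "headers done" signal
      }
      sSeenInfo = true;
      // ... read IHDR fields, set up transforms ...
    }

    void Decode(const uint8_t* aData, size_t aLength) {
      png_structp png = png_create_read_struct(PNG_LIBPNG_VER_STRING,
                                               nullptr, nullptr, nullptr);
      png_infop info = png_create_info_struct(png);
      png_set_progressive_read_fn(png, nullptr, info_cb,
                                  /* row_fn */ nullptr, /* end_fn */ nullptr);
      png_process_data(png, info, const_cast<png_bytep>(aData), aLength);
      png_destroy_read_struct(&png, &info, nullptr);
    }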
In the original Windows clipboard BMP decoder implementation in
nsImageFromClipboard::ConvertColorBitMap, if the bitmap used bitfields
compression, it always adjusted the offset to the RGB data by 12 bytes.
It did this even for newer BMP header formats which explicitly include
space for the bitfields in their header sizes. This patch updates our
BMP decoder to do the same for clipboard BMPs, since we have observed
pasted BMPs using bitfield compression appearing incorrectly. To the
user it looks as if we decoded the color masks as pixel data: a pure
red, blue, and green pixel appear at the start of the last row, and
every other row starts with the last three pixels of the previous row.
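A hedged sketch of the offset arithmetic involved (illustrative names, not the actual decoder code):

    #include <cstdint>

    // Clipboard BI_BITFIELDS payloads carry three 4-byte color masks after
    // the header, even for header versions whose declared size already
    // accounts for them, so the clipboard path always skips them.
    constexpr uint32_t kBiBitfields = 3;  // BI_BITFIELDS

    uint32_t ClipboardRGBDataOffset(uint32_t aHeaderSize, uint32_t aCompression) {
      uint32_t offset = aHeaderSize;
      if (aCompression == kBiBitfields) {
        offset += 3 * sizeof(uint32_t);  // skip the R, G, and B masks (12 bytes)
      }
      return offset;
    }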
Differential Revision: https://phabricator.services.mozilla.com/D19955
Replacing JS and text occurrences of asyncOpen2
Replacing open2 with open
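Representative before/after for the rename (sketch only; `channel` and `listener` are an existing nsIChannel and nsIStreamListener):

    // C++:
    //   rv = channel->AsyncOpen2(listener);  // before
    //   rv = channel->AsyncOpen(listener);   // after
    // JS:
    //   channel.asyncOpen2(listener);        // before
    //   channel.asyncOpen(listener);         // after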
Differential Revision: https://phabricator.services.mozilla.com/D16885
--HG--
rename : layout/style/test/test_asyncopen2.html => layout/style/test/test_asyncopen.html
extra : moz-landing-system : lando
SchemeIs only throws exceptions on null arguments now. Assert
arguments, as they should never be null anyway, and create an
infallible C++ version.
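A hedged sketch of the two call styles (the infallible helper's exact signature is an assumption):

    #include "mozilla/Assertions.h"
    #include "mozilla/Unused.h"
    #include "nsCOMPtr.h"
    #include "nsIURI.h"

    bool IsHttps(nsIURI* aURI) {
      MOZ_ASSERT(aURI);  // per the new contract, arguments are never null
      bool isHttp = false;
      mozilla::Unused << aURI->SchemeIs("http", &isHttp);  // fallible XPCOM form
      return aURI->SchemeIs("https");  // infallible C++ convenience
    }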
Differential Revision: https://phabricator.services.mozilla.com/D16143
--HG--
extra : moz-landing-system : lando
- wrap lines at 80 characters (tw=80)
- set the tab size to 2 characters everywhere (sts=2, sw=2)
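Expressed as the usual source-file mode lines, the resulting settings would look something like (illustrative only):

    /* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
    /* vim: set tw=80 sts=2 sw=2 et: */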
--HG--
extra : rebase_source : 7eedce0311b340c9a5a1265dc42d3121cc0f32a0
extra : amend_source : 9cb4ffdd5005f5c4c14172390dd00b04b2066cd7
There were two unrelated buffering problems in nsWebPDecoder. The first
was with the decoder contract. We are expected to loop until the
iterator is unable to provide more data, and wait for the SourceBuffer
to reschedule us, whereas nsWebPDecoder::DoDecode only did one pass.
Thus, when something yielded wanting more data, we would just wait
forever.
The second was the integration with the libwebp API. We are expected to
retry when we receive SUSPENDED from the decoder, as it decided to yield
pixels instead of continuing to decode as many as possible.
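A hedged sketch of honoring that contract with the libwebp incremental API (not the actual nsWebPDecoder code; the zero-length re-append as a resume call and the progress guard are assumptions of this sketch):

    #include <webp/decode.h>

    bool DrainAvailableRows(WebPIDecoder* aDecoder,
                            const uint8_t* aData, size_t aLength) {
      VP8StatusCode status = WebPIAppend(aDecoder, aData, aLength);
      int lastRowSeen = 0;
      while (status == VP8_STATUS_SUSPENDED) {
        int lastRow = 0, width = 0, height = 0, stride = 0;
        uint8_t* rows =
            WebPIDecGetRGB(aDecoder, &lastRow, &width, &height, &stride);
        if (!rows || lastRow == lastRowSeen) {
          break;  // no forward progress; we genuinely need more input
        }
        // ... hand rows [lastRowSeen, lastRow) to the surface pipe ...
        lastRowSeen = lastRow;
        status = WebPIAppend(aDecoder, aData, 0);  // retry with no new data
      }
      return status == VP8_STATUS_OK || status == VP8_STATUS_SUSPENDED;
    }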
The tests did not cover the first problem because multi-chunk decoder
tests do not use SourceBuffer scheduling. This is an oversight. They will
now write a chunk of data, let the SourceBuffer reschedule the decoder,
and repeat until all of the data has been written.
The tests did not cover the second problem because all of the reference
WebP images are too small. This patch adds a new test with a large WebP
image (converted from a Mozilla all hands photo of lanyards). This
should actually trigger the SUSPEND behaviour of libwebp.
Differential Revision: https://phabricator.services.mozilla.com/D10817
For decoders which produce unpaletted partial frames (APNG, WebP), the
surface format should always be BGRA. These frames, while partial, are
the same size as the output size of the animated image. When
FrameAnimator performs the blend with the compositing frame, it expects
all pixels we don't care about to be set to fully transparent. If it is
BGRX, they will be set to solid white instead.
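A minimal sketch of the resulting format choice (the helper name is hypothetical):

    #include "mozilla/gfx/Types.h"

    mozilla::gfx::SurfaceFormat ChooseFrameFormat(bool aIsPartialFrame,
                                                  bool aHasAlpha) {
      if (aIsPartialFrame) {
        // BGRA: a zeroed pixel is fully transparent, as the blend expects.
        return mozilla::gfx::SurfaceFormat::B8G8R8A8;
      }
      // BGRX treats the X byte as opaque, so untouched pixels render solid.
      return aHasAlpha ? mozilla::gfx::SurfaceFormat::B8G8R8A8
                       : mozilla::gfx::SurfaceFormat::B8G8R8X8;
    }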
Differential Revision: https://phabricator.services.mozilla.com/D10753
First, we did not handle the SourceBufferIterator::WAITING state, which
can happen when we get woken up but there is no data to read from the
SourceBufferIterator. StreamingLexer handled this properly by yielding
with NEED_MORE_DATA, and properly scheduling the decoder to resume. This
patch does the same in the WebP decoder.
Second, nsWebPDecoder::GetType was not implemented. This meant it would
return DecoderType::UNKNOWN, and would fail to recreate the decoder if
we are discarding frames and need to restart from the beginning. In
addition to implementing that method, this patch also corrects an assert
in DecoderFactory::CloneAnimationDecoder which failed to check for WebP
as a supported animated decoder.
This patch also modestly improves the logging output and library method
checks.
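The missing override is essentially a one-liner (sketch; the exact base-class signature is assumed):

    DecoderType nsWebPDecoder::GetType() const { return DecoderType::WEBP; }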
Differential Revision: https://phabricator.services.mozilla.com/D10624
Calls to do_QueryInterface to a base class can be replaced by a static
cast, which is faster.
Differential Revision: https://phabricator.services.mozilla.com/D7224
--HG--
extra : moz-landing-system : lando
If class A is derived from class B, then an instance of class A can be
converted to B via a static cast, so a slower QI is not needed.
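An illustrative before/after (Derived and nsIBase are hypothetical, with Derived inheriting from nsIBase):

    #include "nsCOMPtr.h"

    void Example(Derived* aDerived) {
      // Before: a runtime QueryInterface walk to reach a known base class.
      nsCOMPtr<nsIBase> viaQI = do_QueryInterface(aDerived);
      // After: the inheritance is known at compile time, so a cast suffices.
      nsCOMPtr<nsIBase> viaCast = static_cast<nsIBase*>(aDerived);
    }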
Differential Revision: https://phabricator.services.mozilla.com/D6861
--HG--
extra : moz-landing-system : lando
DecoderFlags::BLEND_ANIMATION will cause the decoder to inject the
BlendAnimationFilter from the previous patch into the SurfacePipe filter
chain. All frames produced by this decoder will be complete, and
should be equivalent to the result output by FrameAnimator.
There are surprisingly many of them.
(Plus a couple of unnecessary checks after `new` calls that were nearby.)
--HG--
extra : rebase_source : 47b6d5d7c5c99b1b50b396daf7a3b67abfd74fc1
The patch introduces NS_GetURIWithNewRef and NS_GetURIWithoutRef, which perform the same function.
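A hedged usage sketch (the exact signature is assumed from nsNetUtil.h):

    nsCOMPtr<nsIURI> withNewRef;
    nsresult rv = NS_GetURIWithNewRef(aURI, NS_LITERAL_CSTRING("section2"),
                                      getter_AddRefs(withNewRef));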
Differential Revision: https://phabricator.services.mozilla.com/D2239
--HG--
extra : moz-landing-system : lando
The JPEG decoder will currently only post an invalidation when it has
processed all of the rows it is able to. If it has all the data, that
means it must fully decode before invalidating. This causes very large
JPEGs to appear in large chunks, which feels janky compared to slowly
appearing row by row with the refresh tick. With WebRender, it also
allows us to upload less data per frame update which can be another
source of jank.
This patch is an automatic replacement of s/NS_NOTREACHED/MOZ_ASSERT_UNREACHABLE/. Reindenting long lines and whitespace fixups follow in patch 6b.
MozReview-Commit-ID: 5UQVHElSpCr
--HG--
extra : rebase_source : 4c1b2fc32b269342f07639266b64941e2270e9c4
extra : source : 907543f6eae716f23a6de52b1ffb1c82908d158a
This was done automatically replacing:
s/mozilla::Move/std::move/
s/ Move(/ std::move(/
s/(Move(/(std::move(/
Removing the 'using mozilla::Move;' lines.
And then a few manual fixups were applied; see the bug for the split series.
MozReview-Commit-ID: Jxze3adipUh
nsGIFDecoder2::YieldPixel is sufficiently complex that the optimizer
does not appear to inline it with the rest of the templated methods. As
such there is a high cost to calling it. This patch modifies it to yield
a requested number of pixels before exiting, allowing us to amortize the
cost of calling across a row instead of a pixel. Based on profiling,
this will significantly reduce the time required to decode a frame.
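A sketch of the amortization idea (hypothetical types standing in for the decoder's LZW state, not the actual nsGIFDecoder2 code):

    #include <cstddef>
    #include <cstdint>

    // Hypothetical pixel source standing in for the GIF LZW state.
    struct PixelSource {
      uint32_t mNext = 0;
      bool HasPixel() const { return mNext < 1000; }
      uint32_t NextPixel() { return mNext++; }
    };

    // Yield up to aCount pixels per call, so the (non-inlined) call
    // overhead is paid once per row rather than once per pixel.
    size_t YieldPixels(PixelSource& aSource, uint32_t* aOut, size_t aCount) {
      size_t written = 0;
      while (written < aCount && aSource.HasPixel()) {
        aOut[written++] = aSource.NextPixel();
      }
      return written;
    }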
NullPrincipal::Create() (with null OA) may cause an OriginAttributes bypass.
We change Create() so OriginAttributes is no longer optional, and rename
the zero-argument Create() to make it more explicit about what the caller is doing.
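A hedged sketch of the resulting call sites (treat the exact names and signatures as assumptions):

    // OriginAttributes is now a required argument:
    RefPtr<NullPrincipal> withOA = NullPrincipal::Create(aOriginAttributes);
    // The zero-argument form is renamed to make the empty OA explicit:
    RefPtr<NullPrincipal> noOA = NullPrincipal::CreateWithoutOriginAttributes();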
MozReview-Commit-ID: 7DQGlgh1tgJ
* Deserialization now only happens via a mutator
* The CID for URI implementations actually returns the nsIURIMutator for each class
* The QueryInterface of mutators implementing nsISerializable will now act as a finalizer if passed the IID of an interface implemented by the URI it holds
MozReview-Commit-ID: H5MUJOEkpia
--HG--
extra : rebase_source : 01c8d16f7d31977eda6ca061e7889cedbf6940c2
* Deserialization now only happens via a mutator
* The CID for URI implementations actually returns the nsIURIMutator for each class
* The QueryInterface of mutators implementing nsISerializable will now act as a finalizer if passed the IID of an interface implemented by the URI it holds
MozReview-Commit-ID: H5MUJOEkpia
--HG--
extra : rebase_source : 8ebb459445cab23288a6c4c86e4e00c6ee611e34
Later in the patch series, we use the new APIs to facilitate cloning of
an existing decoder. This is useful when you want to redecode the same
image with the exact same configuration but from the very beginning.
Originally we attempted to finalize the current frame from the contained
decoder in nsICODecoder::FinishResource. This is wrong because we
haven't acquired the frame from the contained decoder yet. This happens
in nsICODecoder::GetFinalStateFromContainedDecoder, and so the
imgFrame::Finalize call should be moved there. This was causing us to
use fallback image sharing with WebRender after a GPU process crash,
instead of shared surfaces, because it can't get a new file handle for
the surface data until we have finished writing all of the image data.
* Change calls to use nsIURIMutator.setSpec()
* Add a new NS_MutateURI constructor that takes a new Mutator object (usage sketch below)
* Make nsSimpleNestedURI::Mutate() and nsNestedAboutURI::Mutate() return mutable URIs
* Make the finalizers for nsSimpleNestedURI and nsNestedAboutURI mark the returned URIs immutable
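A hedged usage sketch of the mutator pattern these changes support (the exact overloads are assumptions):

    nsCOMPtr<nsIURI> modified;
    nsresult rv = NS_MutateURI(aExistingURI)
                      .SetSpec(NS_LITERAL_CSTRING("about:blank"))
                      .Finalize(modified);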
MozReview-Commit-ID: 1kcv6zMxnv7
--HG--
extra : rebase_source : 99b13e9dbc8eaaa9615843b05e1539e19b527504
If we aren't using a downscaler we avoid this bug because the mask is either 100% transparent or 100% opaque, and in the transparent case we just set the whole pixel (32 bits) to 0.
But when we are using a downscaler we just replace the alpha values in the original surface (leaving the color values untouched).
We need to go the full premultiply route because after downscaling the mask we can have any value for alpha instead of just 0 or 255.
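The premultiply step itself is the standard rounded fixed-point scaling, roughly:

    #include <cstdint>

    // Scale a color channel by alpha: (color * alpha + 127) / 255.
    uint8_t Premultiply(uint8_t aColor, uint8_t aAlpha) {
      return static_cast<uint8_t>((aColor * aAlpha + 127) / 255);
    }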
This also changes URIUtils.cpp:DeserializeURI() to use the mutator to instantiate new URIs, instead of using their default constructor.
MozReview-Commit-ID: JQOvIquuQAP
--HG--
extra : rebase_source : e146624c5ae423f7f69a738aaaafaa55dd0940d9
This is straightforward, with only two notable things.
- `#include "nsXPIDLString.h"` is replaced with `#include "nsString.h"`
throughout, because all nsXPIDLString.h did was include nsString.h. The
exception is for files which already include nsString.h, in which case the
patch just removes the nsXPIDLString.h inclusion.
- The patch removes the |xpidl_string| gtest, but improves the |voided| test to
cover some of its ground, e.g. testing Adopt(nullptr).
--HG--
extra : rebase_source : 452cc4a08046a1adb1a8099a7e85a1917de5add8
These are all simple cases, with similarities to previous patches in this
series.
--HG--
extra : rebase_source : 6ef36382df9fef217d5cb737e218d65ac062f90a
StreamingLexer::Clone should always succeed because we are merely
creating a new SourceBufferIterator at the same position as the given
iterator. However, if there is no more data after the current position,
it is possible for it to return COMPLETE instead of READY. This should
not happen during the first Advance loop, however. We now handle the
failure gracefully, and if someone files a report with an invalid ICO
file that triggers this problem, we can investigate further.
- Use displayPrePath in the pageInfo permissions section that shows "Permissions for:"
- The extra displayPrePath method is necessary because it is difficult to compute manually. This is unlike displaySpecWithoutRef, which we can do without, since it is easy to obtain by truncating displaySpec at the first '#' symbol.
MozReview-Commit-ID: 9RM5kQ2OqfC
nsIURI.originCharset had two use cases:
1) Dealing with the spec-incompliant feature of escapes in the hash
(reference) part of the URL.
2) For UI display of non-UTF-8 URLs.
For hash part handling, we use the document charset instead. For pretty
display of query strings on legacy-encoded pages, we no longer cater to
them (see bug 817374 comment 18).
Also, the URL Standard has no concept of "origin charset". This patch
removes nsIURI.originCharset to reduce complexity and improve spec compliance.
MozReview-Commit-ID: 3tHd0VCWSqF
--HG--
extra : rebase_source : b2caa01f75e5dd26078a7679fd7caa319a65af14
* nsStandardURL::GetHost/GetHostPort/GetSpec contain a punycode-encoded hostname.
* Added nsIURI::GetDisplayHost/GetDisplayHostPort/GetDisplaySpec which have unicode hostnames, depending on the hostname, the character blacklist, and the network.IDN_show_punycode pref
* Removed mHostEncoding since it is not needed anymore (the hostname is always ASCII encoded)
* Added mCheckedIfHostA to know when GetDisplayHost can return the regular host, or when we need to use the cached mDisplayHost
MozReview-Commit-ID: 4qV9Ynhr2Jl
* * *
Bug 945240 - Make sure nsIURI.specIgnoringRef/.getSensitiveInfoHiddenSpec/.prePath contain unicode hosts when network.standard-url.punycode-host is set to false r=mcmanus
MozReview-Commit-ID: F6bZuHOWEsj
--HG--
extra : rebase_source : d8ae8bf774eb22b549370ca96565bafc930faf51
It's silly to use prmem.h within Firefox code given that in our configuration
its functions are just wrappers for malloc() et al. (Indeed, in some places we
mix PR_Malloc() with free(), or malloc() with PR_Free().)
This patch removes all uses, except for the places where we need to use
PR_Free() to free something allocated by another NSPR function; in those cases
I've added a comment explaining which function did the allocation.
--HG--
extra : rebase_source : 0f781bca68b5bf3c4c191e09e277dfc8becffa09
If we were doing a first-frame-only decode, we wouldn't fill in this value. The spec says this chunk must come before any image data, so it should always be available at the end of any full decode (whether truly full or first frame only).
Supports creating a windowless browser on Linux without an X server. Most of the
changes just add branches to avoid calls into GTK which call into X. One of
the bigger additions was a separate headless widget which implements just
enough to render a page. A headless look and feel was also added, since there
are many calls into GTK in the platform-specific one.
Since we dropped XP support, it is unnecessary to load SHGetStockIconInfo via LoadLibrary.
MozReview-Commit-ID: 4lvhVObHv5U
--HG--
extra : rebase_source : 04ac6f97e6a3eff7c52e11e3868da0939efd6ffe