This picks up an existing fix for variant keys, which we started
using in gecko. The lack of this fix broke l10n-merge, and thus
l10n nightlies.
MozReview-Commit-ID: Gy6U52rc6PH
--HG--
extra : amend_source : 8c5e29e877a24558b96dfa18ec5a0ae05a466b93
Websites which collect passwords but don't use HTTPS start showing scary
warnings from Firefox 51 onwards, and mixed content blocking has been
available even longer.
.onion sites without HTTPS support are affected as well, although their
traffic is encrypted and authenticated. This patch addresses this
shortcoming by making sure .onion sites are treated as potentially
trustworthy origins.
The secure context specification
(https://w3c.github.io/webappsec-secure-contexts/) is pretty much focused
on tying security and trustworthiness to the protocol over which domains
are accessed. However, it is not obvious why .onion sites should not be
treated as potentially trustworthy given:
"A potentially trustworthy origin is one which a user agent can
generally trust as delivering data securely.
This algorithms [sic] considers certain hosts, scheme, and origins as
potentially trustworthy, even though they might not be authenticated and
encrypted in the traditional sense."
(https://w3c.github.io/webappsec-secure-contexts/#is-origin-trustworthy)
We use step 8 in the algorithm to establish trustworthiness of .onion
sites by whitelisting them given the encrypted and authenticated nature
of their traffic.
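As a rough illustration (the helper names below are hypothetical, not the
actual Gecko implementation), the whitelist boils down to treating any host
with an .onion suffix as potentially trustworthy:

  #include <string>

  // Sketch only: mirrors step 8 of the spec algorithm, where the user
  // agent may designate additional origins as potentially trustworthy.
  static bool HasOnionSuffix(const std::string& aHost)
  {
    const std::string suffix = ".onion";
    return aHost.size() >= suffix.size() &&
           aHost.compare(aHost.size() - suffix.size(), suffix.size(),
                         suffix) == 0;
  }

  static bool IsPotentiallyTrustworthyOnionHost(const std::string& aHost)
  {
    // .onion traffic is already encrypted and authenticated by the Tor
    // network, so the host is trusted regardless of whether the scheme
    // is https.
    return HasOnionSuffix(aHost);
  }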
With the previous parts of this series, for large animated images we will
now discard previous frames after we reach the threshold. This mochitest
configures a very low threshold, such that it will trigger on a small
animated image. It then verifies that we are able to loop the animation a
couple of times.
When we need to recreate an animated image decoder because it was
discarded, the animation may have progressed beyond the first frame.
Given that later in the patch series we need FrameAnimator to drive the
decoding more actively, we simplify its role by letting it assume that the
decoder's initial state matches its own initial state. Passing in
the currently displayed frame allows the decoder to advance its frame
buffer (and potentially discard unnecessary frames), such that when the
animation actually wants to advance as it normally would, the decoder
state matches what it would have been if it had never been discarded.
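A minimal sketch of the idea (class and method names here are hypothetical,
not the real imagelib API): the recreated decoder is told which frame is
currently displayed and catches up to it before normal advancement resumes.

  #include <cstddef>

  // Sketch only: a recreated decoder seeded with the currently displayed
  // frame, so that from the animator's point of view it behaves as if it
  // had never been discarded.
  class RecreatedAnimationDecoder
  {
  public:
    explicit RecreatedAnimationDecoder(size_t aCurrentFrame)
    {
      // Catch up to the displayed frame; frames before it can be decoded
      // and immediately released, since they will not be shown again until
      // the animation loops.
      while (mNextFrame < aCurrentFrame) {
        DecodeAndDiscardFrame();
        ++mNextFrame;
      }
    }

    // Normal advancement continues from here, matching what the decoder
    // state would have been without the discard.
    void AdvanceFrame() { DecodeFrame(); ++mNextFrame; }

  private:
    void DecodeFrame() { /* decode and retain one frame */ }
    void DecodeAndDiscardFrame() { /* decode one frame, then release it */ }

    size_t mNextFrame = 0;
  };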
Note that AnimationSurfaceProvider will override these methods to give a
proper implementation in a later patch in this series. For now, they are
mostly stubbed, using the default implementation from ISurfaceProvider.
They focus on the main operations we perform on an animation (a rough
sketch of their shape follows the list below):
1) Progressing through the animation, e.g. advancing a frame. If we
don't decode the whole animation up front, we need to know at the
decoder level where we are in the display of the animation.
2) Restarting an animation from the beginning. This is a specialized
case of the above, where we want to skip explicitly advancing through
the remaining frames and instead restart at the beginning. The decoder
may have already discarded the earliest frames and must start redecoding
them.
3) Knowing whether or not the decoder is still active, i.e. whether we can
still be missing frames.
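Roughly, the new hooks have the following shape (method names, signatures,
and default behaviors here are illustrative only; the actual
ISurfaceProvider additions may differ):

  #include <cstddef>

  // Sketch only: illustrative animation-related hooks on a surface
  // provider, with stubbed defaults as described above.
  class SurfaceProviderSketch
  {
  public:
    virtual ~SurfaceProviderSketch() = default;

    // (1) The animation advanced to frame aFrame; lets an on-demand
    //     decoder know where the display currently is.
    virtual void Advance(size_t aFrame) {}

    // (2) Restart the animation from the first frame. The earliest frames
    //     may have been discarded and need to be redecoded.
    virtual bool Reset() { return true; }

    // (3) Whether all frames have been decoded, i.e. whether frames can
    //     still be missing from the buffer.
    virtual bool IsFullyDecoded() const { return true; }
  };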
image.animated.decode-on-demand.threshold-kb is the maximum size in kB
that the aggregate frames of an animation can use before we start to
discard already displayed frames and redecode them as necessary. The
lower it is set, the less overall memory we will consume, at the expense
of execution time, for as long as a tab with an animation above the
threshold is kept open.
image.animated.decode-on-demand.batch-size is the minimum number of
frames we want to have buffered ahead of an animation's currently
displayed frame. The decoding will request this number of frames at a
time to maximize use of memory caching. Note that this is related to the
above preference as well; increasing the batch size will in effect raise
the minimum threshold. This simplifies the logic in patches later in the
series.
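For illustration, the prefs might be read through Gecko's Preferences API
as sketched below; the call sites and the default values shown are
assumptions, not necessarily what the patch uses:

  #include <cstdint>
  #include "mozilla/Preferences.h"

  // Sketch only: reading the two prefs. The defaults here are
  // illustrative.
  static uint32_t AnimationThresholdKB()
  {
    // Aggregate frame size (in kB) above which already displayed frames
    // are discarded and redecoded on demand.
    return mozilla::Preferences::GetUint(
        "image.animated.decode-on-demand.threshold-kb", 20480);
  }

  static uint32_t AnimationBatchSize()
  {
    // Minimum number of frames kept decoded ahead of the currently
    // displayed frame; decoding is requested in batches of this size.
    return mozilla::Preferences::GetUint(
        "image.animated.decode-on-demand.batch-size", 6);
  }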
Later in the patch series, we use the new APIs to facilitate cloning of
an existing decoder. This is useful when we want to redecode the same
image with the exact same configuration, but from the very beginning.