Factory::DoesBackendSupportDataDrawtarget already fulfills the same
purpose and we should use that instead, as imgFrame is the only user of
the former API. It has the added bonus of allowing us to use shared
surfaces on Linux with WebRender, and volatile surfaces on Windows
when D2D is disabled.
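As a sketch (hypothetical call site; only
Factory::DoesBackendSupportDataDrawtarget is from this patch), the check
would drive the choice of surface roughly like so:

    BackendType backend =
        gfxPlatform::GetPlatform()->GetDefaultContentBackend();
    if (gfx::Factory::DoesBackendSupportDataDrawtarget(backend)) {
      // Safe to back the frame with a data surface (shared memory on
      // Linux/WebRender, volatile memory on Windows without D2D).
    } else {
      // Fall back to a platform surface the backend can draw to.
    }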
The "current URL" in the spec:
https://html.spec.whatwg.org/multipage/embedded-content.html#dom-img-currentsrc
maps to imgIRequest.URI, not currentURI.
Rename imgIRequest.currentURI to finalURI to prevent such confusion.
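After the rename, the relevant part of imgIRequest.idl reads roughly as
follows (a sketch; the comments are illustrative):

    readonly attribute nsIURI URI;      // the spec's "current URL"
    readonly attribute nsIURI finalURI; // the URI after any redirects;
                                        // previously named currentURI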
MozReview-Commit-ID: CjBh2V4z8K9
--HG--
extra : rebase_source : 01277d16ef12845e12cc846f9dd4a21ceeca283b
This also changes URIUtils.cpp:DeserializeURI() to use the mutator to instantiate new URIs, instead of using their default constructor.
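For illustration, the mutator pattern looks roughly like this (a sketch
assuming the nsIURIMutator API; nsSimpleURI::Mutator is just one example
concrete mutator):

    nsCOMPtr<nsIURI> uri;
    nsresult rv = NS_MutateURI(new mozilla::net::nsSimpleURI::Mutator())
                    .SetSpec(NS_LITERAL_CSTRING("http://example.com/"))
                    .Finalize(uri);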
MozReview-Commit-ID: JQOvIquuQAP
--HG--
extra : rebase_source : e146624c5ae423f7f69a738aaaafaa55dd0940d9
The "current URL" in the spec:
https://html.spec.whatwg.org/multipage/embedded-content.html#dom-img-currentsrc
maps to imgIRequest.URI, not currentURI.
Rename imgIRequest.currentURI to finalURI to prevent such confusion.
MozReview-Commit-ID: CjBh2V4z8K9
--HG--
extra : rebase_source : d3047aed22f116ff9a74099b646a84e597388673
This is important because it ensures we release the shared memory handle
(although not the data itself) for the underlying surface buffer when it
turns out we will probably never need to share it. If we do need to
share the surface data with the GPU process, it will reallocate a handle
if necessary, and close it when it is finished. On some platforms we
only have a finite number of handles, so if we don't need them, we
should close them.
This is largely trivial because the meat of the implementation is
located in ImageResource and we already added GetFrameInternal.
Interestingly VectorImage::IsUnlocked does not actually check if the
image is locked, but instead only checks for animation consumers. This
is consistent with its historical behavior on when to issue an unlocked
draw event.
Note that we do not implement the original GetImageContainer and
IsImageContainerAvailable APIs. This is because the former does not
accept an SVG context and it would be best to discourage its use in old
code lest we get incorrect/unexpected results.
No functional change aside from the implementation of
VectorImage::GetFrameAtSize being repurposed for GetFrameInternal,
which now returns an additional error code with the surface.
Creating a DrawTarget can be an expensive operation. This is especially
true in this case because checking for a cached, already-decoded version
of the VectorImage is expected to be fast. Currently VectorImage::Draw
is the typical path to render these images, but in the future, getting
the frames directly or indirectly (through an ImageContainer) will
become more common.
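In other words, the cache lookup should be the fast path, roughly
(sketch; LookupCachedSurface is a hypothetical stand-in for the actual
cache check):

    // Cheap: consult the surface cache before doing any real work.
    RefPtr<SourceSurface> surface =
        LookupCachedSurface(aSize, aSVGContext, aFlags);
    if (surface) {
      return surface;  // No DrawTarget was ever created.
    }
    // Expensive: create a DrawTarget and rasterize the SVG into it.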
When FLAG_HIGH_QUALITY_SCALING is used, we need to make sure we continue
using that flag when we update the container. We should also use it when
determining whether an existing image container is equivalent.
This adds IsImageContainerAvailableAtSize and GetImageContainerAtSize to
the imgIContainer interface, as well as stubbing them out in all of the
classes that implement it. The real implementations will follow for the
more complicated classes (RasterImage, VectorImage).
Exposure of this functionality comes in a later patch in the set.
Experimental testing with WebRender and image layers enabled suggests
most of the time we are not using more than one image container per
image, which is why mImageContainers has room for one container without a
malloc.
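Concretely, this is the usual AutoTArray pattern with one element of
inline storage (the entry type name here is illustrative):

    // One inline slot; a second container triggers a heap allocation.
    AutoTArray<ImageContainerEntry, 1> mImageContainers;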
RasterImage::GetCurrentImage can only return a subset of the DrawResult
values, and the original RasterImage::GetImageContainer implementation
relied upon this behavior. Now we handle them all to ensure that when
other image implementations reuse it, they may return any valid
DrawResult and get the expected results.
As part of the move, we add an IntSize parameter to
ImageResource::GetCurrentImage. This is because we don't have access to
the image's size (yet) from ImageResource, but additionally because we
will need this anyway when we support multiple image containers at
different sizes.
The only change to the moved implementation is that we no longer have
access to RasterImage::mHasSize and RasterImage::mSize. Thus we rely
upon imgIContainer::IsImageContainerAvailable to perform these checks.
This state will eventually be used by VectorImage when it supports image
containers. For now, it is harmless beyond using slightly more memory
for SVGs.
An imgRequestProxy may defer notifications when it needs to block on an
imgCacheValidator. It may also be cancelled before the validator has
completed its operation, but before this change, we did not remove the
request from the set of proxies, imgCacheValidator::mProxies. When the
deferral was completed, it would assert to ensure each proxy was still
expecting a deferral before issuing the notifications. Cancelling a
request can actually reset that state, which means we fail the assert.
Failing the assert is actually harmless; in release we suffer no
negative consequences as a result of this sequence of events. Now we
just remove the proxy from the validator set to avoid asserting.
The core of this change is in gfxContext.*:
- change gfxContext::CurrentMatrix() and gfxContext::SetMatrix() to
return and take a Matrix respectively, instead of converting to
and from a gfxMatrix (which uses doubles). These functions therefore
will now match the native representation of the transform in gfxContext.
- add two new functions CurrentMatrixDouble() and SetMatrixDouble() that
do what the old CurrentMatrix() and SetMatrix() used to do, i.e.
convert between the float matrix and the double matrix.
The rest of the change is just updating the call sites to avoid round-
tripping between floats and doubles where possible. Call sites that are
hard to fix are migrated to the new XXXDouble functions, which preserve
the existing behaviour.
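A typical call-site migration looks like this (sketch):

    // Preferred: stay in floats, matching gfxContext's representation.
    gfx::Matrix m = aContext->CurrentMatrix();
    m.PreTranslate(1.0f, 2.0f);
    aContext->SetMatrix(m);

    // Hard-to-fix call sites keep the old double-based behaviour:
    gfxMatrix md = aContext->CurrentMatrixDouble();
    aContext->SetMatrixDouble(md);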
MozReview-Commit-ID: 5sbBpLUus3U
imgRequestProxy::CancelAndForgetObserver was intended to always dispatch
any load group removals, due to reentrancy conflicts with the callers.
However, bug 1404422 overlooked the fact that imgRequest::RemoveProxy can
indirectly trigger a load group removal by completing an incomplete
request.
Historically imgRequestProxy::PerformClone would only add the cloned
request to the (original proxy's) document's load group if the request
was still being validated. Now it adds the cloned request to the given
document's load group before requesting the notifications, unless the
request has already been completed. We ensure that any removals from
the load group occur outside the current execution context.
Legacy listeners may use imgRequestProxy::SyncClone to request
notifications on the image state. Ideally they would not, but they do
not work as expected with the asynchronous notifications all new callers
must use. While in theory this would suggest their code is re-entrant,
not all of it is. In particular we need to be sensitive about when we
remove a request from a load group.
There should be no functional change here, but we rely upon the new
structure in the next patch in the series. This separates out the
notions of removing a request from the load group (which is always
final, and must be executed outside of synchronous calls from the owner
of the imgRequestProxy) and wanting to re-add a request to the load group
as a background request (for multipart images).
The most important addition is mForceDispatchLoadGroup, which, if true
when imgRequestProxy::RemoveFromLoadGroup is called, dispatches the
removal from the load group instead of executing it inline. This ensures
safety for any callers (e.g. callers of CancelAndForgetObserver), as
above.
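A sketch of the removal logic (mForceDispatchLoadGroup and
RemoveFromLoadGroup are from this patch; the other details are
simplified):

    void imgRequestProxy::RemoveFromLoadGroup() {
      if (!mLoadGroup) {
        return;
      }
      if (mForceDispatchLoadGroup) {
        // We may be inside a synchronous call from our owner; defer.
        RefPtr<imgRequestProxy> self = this;
        NS_DispatchToCurrentThread(NS_NewRunnableFunction(
            "imgRequestProxy::RemoveFromLoadGroup", [self]() {
              self->mLoadGroup->RemoveRequest(self, nullptr, NS_OK);
            }));
      } else {
        mLoadGroup->RemoveRequest(this, nullptr, NS_OK);
      }
    }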
imgRequestProxy::SetLoadGroup did not have a predictable effect and
it appears to be unused. It is somewhat complicated to support given
that we must be sensitive about the context in which we remove the
request from the original load group.
imgLoader::LoadImage now asserts in debug builds that the load group
given as a parameter matches that of the given document (if any). If
they mismatch, then we won't be blocking the document's load event as we
expect with the future removal of the imgIOnloadBlocker.
imgLoader::LoadImageWithChannel never actually added the request to the
load group at all, unless it was done as part of the validator. Now it
will consistently add the request to the channel's load group as
expected. Additionally it also asserts in debug builds that the
channel's load group matches that of the given document, as in
LoadImage.
It's easy to mess up the scoping so that (a) the label is pushed and then
immediately popped, and/or (b) the string doesn't live long enough. It's also
easy to do a utf16-to-utf8 conversion unnecessarily when the profiler is
inactive. This patch splits the AUTO_PROFILER_LABEL_DYNAMIC macro into
three new ones that are harder to mess up.
- AUTO_PROFILER_LABEL_DYNAMIC_CSTR: same as current.
- AUTO_PROFILER_LABEL_DYNAMIC_NSCSTRING: for nsCStrings.
- AUTO_PROFILER_LABEL_DYNAMIC_LOSSY_NSSTRING: for nsStrings.
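For example, a hypothetical call site that already has an nsCString can
now write:

    void DecodeImage(const nsCString& aURI) {
      // The string is only touched when the profiler is active.
      AUTO_PROFILER_LABEL_DYNAMIC_NSCSTRING("DecodeImage", GRAPHICS, aURI);
      // ... decoding work ...
    }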
--HG--
extra : rebase_source : 3e2bbec4737b696e1c86579ae54be4cb3186c100
Most cases where the pointer is stored into an already-declared variable can
trivially be changed to MakeNotNull<T*>, as the NotNull raw pointer will end
up in a smart pointer.
In RAII cases, the target type can be specified (e.g.
`MakeNotNull<RefPtr<imgFrame>>()`), in which case the variable type may
just be `auto`, similar to the common use of MakeUnique.
Except when the target type is a base pointer, in which case it must be
specified in the declaration.
MozReview-Commit-ID: BYaSsvMhiDi
--HG--
extra : rebase_source : 8fe6f2aeaff5f515b7af2276c439004fa3a1f3ab
We're currently fairly vague and inconsistent about the values we provide to
content policy implementations for requestOrigin and requestPrincipal. In some
cases they're the triggering principal, sometimes the loading principal,
sometimes the channel principal.
Our existing content policy implementations which require or expect a loading
principal currently retrieve it from the context node. Since no current
callers require the principal to be the loading principal, and some already
expect it to be the triggering principal (which there's currently no other way
to retrieve), I chose to pass the triggering principal whenever possible, but
use the loading principal to determine the origin URL.
As a follow-up, I'd like to change the nsIContentPolicy interface to
explicitly receive loading and triggering principals, or possibly just
LoadInfo instances, rather than poorly-defined request
origin/principal/context args. But since that may cause trouble for
comm-central, I'd rather not do it as part of this bug.
MozReview-Commit-ID: LqD9GxdzMte
--HG--
extra : rebase_source : 41ce439912ae7b895e0a3b0e660fa6ba571eb50f
It's easy to mess up the scoping so that (a) the label is pushed and then
immediately popped, and/or (b) the string doesn't live long enough. It's also
easy to do a utf16-to-utf8 conversion unnecessarily when the profiler is
inactive.
This patch splits the AUTO_PROFILER_LABEL_DYNAMIC macro into three new ones that are harder to mess up.
- AUTO_PROFILER_LABEL_DYNAMIC_CSTR: same as current.
- AUTO_PROFILER_LABEL_DYNAMIC_NSCSTRING: for nsCStrings.
- AUTO_PROFILER_LABEL_DYNAMIC_LOSSY_NSSTRING: for nsStrings.
--HG--
extra : rebase_source : 53c8b43b6a1be06d00618a133e28bf95c46a3ba3
It's easy to mess up the scoping so that (a) the label is pushed and then
immediately popped, and/or (b) the string doesn't live long enough. It's also
easy to do a utf16-to-utf8 conversion unnecessarily when the profiler is
inactive.
This patch splits the AUTO_PROFILER_LABEL_DYNAMIC macro into three new ones that are harder to mess up.
- AUTO_PROFILER_LABEL_DYNAMIC_CSTR: same as current.
- AUTO_PROFILER_LABEL_DYNAMIC_NSCSTRING: for nsCStrings.
- AUTO_PROFILER_LABEL_DYNAMIC_LOSSY_NSSTRING: for nsStrings.
--HG--
extra : rebase_source : 59f77df0124249bfd11fee3585420a17b4201d37
The imgLoader code consistently uses the term 'loadingPrincipal' for the
principal that is called the triggeringPrincipal everywhere else it's used.
This is confusing, and since we need to make changes to how those values are
determined, it should be fixed beforehand.
MozReview-Commit-ID: 8CTHwayzcaD
--HG--
extra : rebase_source : d4405b0ecfe1c8dfb9bfdf61fe6ed6cfb180ba83
Currently the Gecko Profiler defines a moderate amount of stuff when
MOZ_GECKO_PROFILER is undefined. It also #includes various headers, including
JS ones. This is making it difficult to separate Gecko's media stack for
inclusion in Servo.
This patch greatly simplifies how things are exposed. The starting point is:
- GeckoProfiler.h can be #included unconditionally;
- everything else from the profiler must be guarded by MOZ_GECKO_PROFILER.
In practice this introduces way too many #ifdefs, so the patch loosens it by
adding no-op macros for a number of the most common operations.
The net result is that #ifdefs and macros are used a bit more, but almost
nothing is exposed in non-MOZ_GECKO_PROFILER builds (including
ProfilerMarkerPayload.h and GeckoProfiler.h), and understanding what is exposed
is much simpler than before.
Note also that in BHR, ThreadStackHelper is now entirely absent in
non-MOZ_GECKO_PROFILER builds.
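The no-op macro pattern, in miniature (illustrative, not the actual
Gecko definitions):

    #ifdef MOZ_GECKO_PROFILER
    # define AUTO_PROFILER_INIT mozilla::AutoProfilerInit raiiObject
    #else
    # define AUTO_PROFILER_INIT  // expands to nothing in these builds
    #endif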
This patch:
- adds fails-if annotations for all the reftests that were consistently failing
with layers-free turned on.
- removes fails-if or reduces the range on fuzzy-if annotations for all
the reftests that were producing UNEXPECTED-PASS results with
layers-free turned on.
- adds skip-if, random-if, or fuzzy-if annotations to the reftests that
were intermittently failing due to timeout, obvious incorrectness, or
slight pixel differences, respectively.
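The annotations take forms like these (hypothetical manifest entries;
the condition name is illustrative):

    fails-if(layersFree) == sample-1.html sample-1-ref.html
    fuzzy-if(layersFree,2,40) == sample-2.html sample-2-ref.html
    skip-if(layersFree) == sample-3.html sample-3-ref.html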
MozReview-Commit-ID: A0Aknn6rnjj
--HG--
extra : rebase_source : 420d9cf43f23a5d654fa36eec69138937d13c173
Currently we only permit requests from HTTP channels to be retargeted to
the image IO thread. It was implemented this way originally in bug
867755 but it does not appear there was a specific reason for that.
The only kink in this is that some browser chrome mochitests listen for
debug-build-only events to ensure certain chrome images are loaded and/or
drawn. As such, this patch ensures that those observer notifications
continue to be served, requiring a dispatch from the image IO thread to
the main thread.
Another issue to note is that SVGs must be processed on the main thread;
the underlying SVG document can only be accessed from it. We enforce
this by checking the content type. The possibility already exists that
an HTTP response could contain the wrong content type, and in that case,
we fail to decode the image, as there is no content sniffing support for
SVG. Thus there should be no additional risk taken by using the image IO
thread from other non-HTTP channels (if they don't specify the SVG
content type, it is not rendered today, and if they do, it will remain
on the main thread as it is today).
We also ignore data URIs. The specification requires that we process
these images synchronously. See bug 1325080 for details.
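A sketch of the resulting retargeting predicate (hypothetical helper;
IMAGE_SVG_XML is the existing content-type macro from nsMimeTypes.h):

    bool CanRetargetToImageIOThread(const nsACString& aContentType,
                                    bool aIsDataURI) {
      // SVG documents must stay on the main thread, and data: URIs must
      // be processed synchronously (see bug 1325080).
      return !aIsDataURI && !aContentType.EqualsLiteral(IMAGE_SVG_XML);
    }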
There are a number of operations with the surface cache which may result
in individual surfaces for a particular image cache being removed. If an
image cache is emptied, and we are in factor of 2 mode, we should reset
it to the default mode, because we require at least one surface to be
available to determine the native/ideal size. Additionally, if the cache
is not locked, it should be removed entirely from the surface cache. We
handle this correctly in methods such as Lookup and LookupBestMatch, but
Prune and CollectSizeOfSurfaces can also cause this to happen, as
recently done in bug 1370412 and bug 1380649.
In order to let necko postpone the load of favicons, we have to set a request context ID on the HTTP channel that is created to load the favicon.
This patch starts by passing a request context ID to nsContentUtils::LoadImage and makes the other changes necessary to set the request context ID on the channel.
When we lookup a surface in the cache, we are careful to remove any
surfaces which were backed by volatile memory and got purged before we
could reacquire the buffer. We were not so careful in doing that when
generating memory reports. ISurfaceProvider::AddSizeOfExcludingThis will
cause us to acquire the buffer, and if it was purged, forget about its
purged status. Later, when we performed a lookup, we would be unaware
that it had been purged, and assume we have the right data. This would
appear as
completely transparent for BGRA surfaces, and completely black for BGRX
surfaces.
With this patch, we now properly remove purged surfaces instead of
including them in the report. This ensures that the cache state is
consistent. This also resolves memory reports of surfaces which reported
using no data -- they were purged when the report was generated.
Additionally, there was a bug in SurfaceCache::PruneImage where we did
not discard surfaces outside the module lock. Both PruneImage and
CollectSizeOfSurfaces now free any discarded surfaces outside the lock.
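The reporting loop now has roughly this shape (names illustrative):

    nsTArray<RefPtr<CachedSurface>> discard;
    for (const RefPtr<CachedSurface>& surface : mSurfaces) {
      if (surface->IsPurged()) {
        discard.AppendElement(surface);  // remove rather than report
        continue;
      }
      surface->AddSizeOfExcludingThis(aMallocSizeOf, aCounters);
    }
    // ... release the module lock, then let |discard| free the surfaces.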
When the surface cache starts tracking an unlocked surface, it must
insert it into the expiration tracker, so that it can be freed later if
it remains unused. ExpirationTrackerImpl::AddObjectLocked can fail
due to out-of-memory conditions or during shutdown, which we previously
ignored, and could leave us in a state where we think the surface is in
the tracker but is not. When we later try to mark the surface as used in
the tracker, it will hit a release assert because it doesn't exist. Now
we handle the insertion failure by discarding the surface. Marking the
surface as used can itself encounter a similar issue, and we handle it
the same way.
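Sketch of the new failure handling (simplified; Remove is the existing
cache-removal path):

    nsresult rv = mExpirationTracker.AddObjectLocked(aSurface, aAutoLock);
    if (NS_FAILED(rv)) {
      // The tracker never accepted the surface; keeping it would later
      // trip a release assert when we try to mark it used. Discard it.
      Remove(aSurface, aAutoLock);
      return;
    }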
MozReview-Commit-ID: Kv6l0znnG48
An ImageSurfaceCache cannot enter factor-of-2 mode without a minimum
number of surfaces being present in its cache. However those surfaces
can be purged from the cache through various means (expiring due to
disuse, volatile buffers being purged, etc.). Also, it is entirely possible
that all the surfaces get purged, but the cache itself remains. Since
factor-of-2 mode requires at least one surface (to get the owning image
and its native size), we need to handle the case when the cache is
emptied appropriately. As such, we now reset the factor-of-2 mode (and
its pruned state) to the default (false) if we transition from non-empty
to empty.
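The transition check amounts to something like this (member names
illustrative):

    void ImageSurfaceCache::AfterSurfaceRemoved() {
      if (IsEmpty() && mFactor2Mode) {
        // No surface left to tell us the native size; go back to the
        // default mode and forget any pruning we did.
        mFactor2Mode = false;
        mFactor2Pruned = false;
      }
    }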
MozReview-Commit-ID: EVaEqW59Asv
When SurfaceCache::Lookup is called to access surface data, it indicates
that the caller will not accept substitutes, as it would in the case of
SurfaceCache::LookupBestMatch. As such, we need to be careful not to
remove those surfaces from our cache when pruning (in part 8b). This is
the marker used to track that, at some point, there was a caller which
got this surface and would accept no other (e.g. factor of 2 mode must
make an exception for this particular surface).
In bug 1383499 we fixed the case where on Android, animated images could
consume all of the available file handles. This is because each volatile
buffer will contain a file handle on Android, and animated images can
contain many, many frames. However in doing so we introduced a bug where
the stride of the replacement surface was aligned to a 16-byte boundary.
The image decoding framework does not currently support any stride value
other than pixel size * image width. This may be something we correct in
the future, but for now we should just ensure all surfaces follow the
expected stride value.
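That is, the replacement surface must use the tightly packed stride the
decoders assume (sketch; aSize and aFormat stand for the decoder's
parameters):

    // No 16-byte row alignment: decoders require tightly packed rows.
    int32_t stride = aSize.width * BytesPerPixel(aFormat);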
This is straightforward, with only two notable things.
- `#include "nsXPIDLString.h" is replaced with `#include "nsString.h"`
throughout, because all nsXPIDLString.h did was include nsString.h. The
exception is for files which already include nsString.h, in which case the
patch just removes the nsXPIDLString.h inclusion.
- The patch removes the |xpidl_string| gtest, but improves the |voided| test to
cover some of its ground, e.g. testing Adopt(nullptr).
--HG--
extra : rebase_source : 452cc4a08046a1adb1a8099a7e85a1917de5add8
We should not declare forward declarations for nsString classes directly;
instead we should use nsStringFwd.h. This will make changing the underlying
types easier.
--HG--
extra : rebase_source : b2c7554e8632f078167ff2f609392e63a136c299
These are all simple cases, with similarities to previous patches in this
series.
--HG--
extra : rebase_source : 6ef36382df9fef217d5cb737e218d65ac062f90a
These are all easy cases where an nsXPIDLCString local variable is set via
getter_Copies() and then is used in ways that rely on the implicit conversion
to |char*|. The patch uses get() and EqualsLiteral() calls to replace the
implicit conversions.
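A typical before/after (sketch; the pref name and Use() are just
examples):

    nsCString value;  // was: nsXPIDLCString value;
    rv = prefs->GetCharPref("image.example.pref", getter_Copies(value));
    // was: if (!strcmp(value, "foo")) Use(value);
    if (NS_SUCCEEDED(rv) && value.EqualsLiteral("foo")) {
      Use(value.get());
    }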
Animated image frames are never actually released to let the OS purge
them, because AnimationSurfaceProvider keeps them as RawAccessFrameRef.
Meanwhile, each volatile buffer on Android needs a file handle to be
retained for as long as the buffer lives. Given a sufficient number of
animated image frames, we could easily exhaust the available file
handles (which in turn causes many problems). As such on Android we
should stick with the heap for animated image frames. On other platforms
it is better to stay with volatile buffers, because we also avoid a
memset: the OS will zero-fill the requested pages on our behalf.
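Schematically (illustrative; the real allocation helpers differ):

    #ifdef ANDROID
      // Heap: a volatile buffer would pin an ashmem file handle for
      // every frame of the animation. calloc also zeroes the memory.
      mBuf = static_cast<uint8_t*>(calloc(1, aLength));
    #else
      // Volatile buffer: the OS zero-fills freshly mapped pages, so we
      // avoid an explicit memset as well.
      mVBuf = AllocateVolatileBuffer(aLength);
    #endif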
StreamingLexer::Clone should always succeed because we are merely
creating a new SourceBufferIterator which is at the same position as the
given iterator. However, if there is no more data after the current
position, it is possible it could return COMPLETE instead of READY.
This should not happen during the first Advance loop, however. We handle
the failure gracefully now, and if someone files a report with an ICO
file that triggers this problem, we can investigate further.
This patch does the following.
- Moves nsWindowSizes from nsWindowMemoryReporter.h to its own file,
nsWindowSizes.h, so it can be included more widely without exposing
nsWindowMemoryReporter.
- Merges nsArenaMemoryStats.h (which defines nsTabSizes and nsArenaMemoryStats)
into nsWindowSizes.h.
- Renames nsArenaMemoryStats as nsArenaSizes, and nsWindowSizes::mArenaStats as
nsWindowSizes::mArenaSizes. This is the more usual naming scheme for such
types.
- Renames FRAME_ID_STAT_FIELD as NS_ARENA_SIZES_FIELD.
- Passes nsWindowSizes to PresShell::AddSizeOfIncludingThis() and
nsPresArena::AddSizeOfExcludingThis(), instead of a bunch of smaller things.
One nice consequence is that the odd nsArenaMemoryStats::mOther field is no
longer necessary, because we can update nsWindowSizes::mLayoutPresShellSize
directly in nsPresArena::AddSizeOfExcludingThis().
- Adds |const| to a few methods.
MozReview-Commit-ID: EpgFWKFqy7Y
When an animated image has been discarded, we avoided marking the
composited frame invalid unless it had been previously decoded. Most of
the time this was fine, but if the animated image was still decoding for
the first time, then we still had a composited frame lingering that we
did not mark as invalid. As a result, when we called
RasterImage::LookupFrame (and indirectly
FrameAnimator::GetCompositedFrame), it would always return the
composited frame. This meant that RasterImage::Decode would never be
called to trigger a redecode. At the same time,
FrameAnimator::RequestRefresh would not cause us to advance the frame
because the state was still discarded.
With this patch we separate out the concepts of "has ever requested to
be decoded" and "has ever completed decoding." The former is now used to
control whether or not a composited frame is marked as invalid after we
discover we currently have no surface for the animation -- this solves
the animation remaining frozen as we now request the redecode as
expected. The latter remains used to determine if we actually know the
total number of frames.
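The separated state boils down to two flags with the roles described
above (names illustrative):

    bool mHasRequestedDecode;  // gates invalidating the composited frame
    bool mHasCompletedDecode;  // tells us the total frame count is known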