Historically imgRequestProxy::PerformClone would only add the cloned
request to the (original proxy's) document's load group if the request
was still being validated. Now it adds the cloned request to the given
document's load group before requesting the notifications, unless the
request has already been completed. We ensure that any removals from
the load group occur outside the current execution context.
Legacy listeners may use imgRequestProxy::SyncClone to request
notifications about the image state. Ideally they would not, but they do
not work as expected with the asynchronous notifications all new callers
must use. While receiving notifications synchronously would in theory
suggest their code is re-entrant, not all of it is. In particular we need
to be sensitive about when we remove a request from a load group.
There should be no functional change here, but we rely upon the new
structure in the next patch in the series. This separates out the
notions of removing a request from the load group (which is always
final, and must be executed outside of synchronous calls from the owner
of the imgRequestProxy) and wanting to re-add a request to the load group
as a background request (for multipart images).
The most important addition is mForceDispatchLoadGroup: if it is true
when imgRequestProxy::RemoveFromLoadGroup is called, the removal from the
load group is dispatched instead of executed inline. This ensures safety
for any callers (e.g. to CancelAndForgetObserver) as above.
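A rough sketch of that behaviour (the guard members other than
mForceDispatchLoadGroup are illustrative, not the exact imgRequestProxy
fields):
  void imgRequestProxy::RemoveFromLoadGroup()
  {
    if (!mIsInLoadGroup || !mLoadGroup) {
      return;
    }
    if (mForceDispatchLoadGroup) {
      // Removing the request may fire the document's load event; never do
      // that inside a synchronous call from the proxy's owner.
      RefPtr<imgRequestProxy> self = this;
      nsCOMPtr<nsILoadGroup> loadGroup = mLoadGroup;
      NS_DispatchToCurrentThread(NS_NewRunnableFunction(
        "imgRequestProxy::RemoveFromLoadGroup",
        [self, loadGroup]() { loadGroup->RemoveRequest(self, nullptr, NS_OK); }));
    } else {
      mLoadGroup->RemoveRequest(this, nullptr, NS_OK);
    }
    mIsInLoadGroup = false;
    mLoadGroup = nullptr;
  }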
imgRequestProxy::SetLoadGroup did not have a predictable effect and it
appears to be unused. It is also somewhat complicated to support, given
that we must be careful about the context in which we remove the request
from the original load group.
imgLoader::LoadImage now asserts in debug builds that the load group
given as a parameter matches that of the given document (if any). If they
do not match, then we will not block the document's load event as we
expect to once imgIOnloadBlocker is removed in the future.
imgLoader::LoadImageWithChannel never actually added the request to the
load group at all, unless it was done as part of the validator. Now it
consistently adds the request to the channel's load group as expected. It
also asserts in debug builds that the channel's load group matches that
of the given document, as in LoadImage.
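The debug-build check described above looks roughly like this (a sketch;
the accessor name is an assumption):
  #ifdef DEBUG
    if (aLoadingDocument) {
      // The caller's load group should be the one owned by the loading
      // document, otherwise the request won't block that document's load
      // event once imgIOnloadBlocker goes away.
      nsCOMPtr<nsILoadGroup> docLoadGroup =
        aLoadingDocument->GetDocumentLoadGroup();
      MOZ_ASSERT(docLoadGroup == aLoadGroup,
                 "Load group mismatch with the loading document");
    }
  #endif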
It's easy to mess up the scoping so that (a) the label is pushed and then
immediately popped, and/or (b) the string doesn't live long enough. It's
also easy to do a UTF-16 to UTF-8 conversion unnecessarily when the
profiler is inactive. This patch splits the existing
AUTO_PROFILER_LABEL_DYNAMIC macro into three new ones that are harder to
mess up; an illustrative call site follows the list below.
- AUTO_PROFILER_LABEL_DYNAMIC_CSTR: same as current.
- AUTO_PROFILER_LABEL_DYNAMIC_NSCSTRING: for nsCStrings.
- AUTO_PROFILER_LABEL_DYNAMIC_LOSSY_NSSTRING: for nsStrings.
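For illustration, call sites end up looking something like this (the
label text, category, and exact argument order here are assumptions):
  void ExampleWork(const char* aURL, const nsCString& aSpec, const nsString& aWide)
  {
    // Plain C string.
    AUTO_PROFILER_LABEL_DYNAMIC_CSTR("ExampleWork", OTHER, aURL);
    // nsCString; no manual .get() or temporary needed at the call site.
    AUTO_PROFILER_LABEL_DYNAMIC_NSCSTRING("ExampleWork", OTHER, aSpec);
    // nsString; the lossy UTF-16 to UTF-8 conversion is meant to be
    // avoided when the profiler is inactive.
    AUTO_PROFILER_LABEL_DYNAMIC_LOSSY_NSSTRING("ExampleWork", OTHER, aWide);
  }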
--HG--
extra : rebase_source : 3e2bbec4737b696e1c86579ae54be4cb3186c100
Most cases where the pointer is stored into an already-declared variable can
trivially be changed to MakeNotNull<T*>, as the NotNull raw pointer will end
up in a smart pointer.
In RAII cases, the target type can be specified (e.g.
`MakeNotNull<RefPtr<imgFrame>>()`), in which case the variable type may
just be `auto`, similar to the common use of MakeUnique. The exception is
when the target type is a base pointer, in which case it must be
specified in the declaration.
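For example (a sketch of the two shapes described above; Foo is a
hypothetical refcounted class):
  #include "mozilla/NotNull.h"   // MakeNotNull, NotNull
  #include "mozilla/RefPtr.h"
  #include "nsISupportsImpl.h"   // NS_INLINE_DECL_REFCOUNTING
  using mozilla::MakeNotNull;
  class Foo {
   public:
    NS_INLINE_DECL_REFCOUNTING(Foo)
    Foo() = default;
   private:
    ~Foo() = default;
  };
  void Example()
  {
    // Pointer stored into an already-declared variable: the NotNull<Foo*>
    // returned by MakeNotNull<T*> ends up in the smart pointer.
    RefPtr<Foo> a;
    a = MakeNotNull<Foo*>();
    // RAII case: name the target smart-pointer type and let the variable
    // be `auto`, much like the common use of MakeUnique.
    auto b = MakeNotNull<RefPtr<Foo>>();
  }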
MozReview-Commit-ID: BYaSsvMhiDi
--HG--
extra : rebase_source : 8fe6f2aeaff5f515b7af2276c439004fa3a1f3ab
We're currently fairly vague and inconsistent about the values we provide to
content policy implementations for requestOrigin and requestPrincipal. In some
cases they're the triggering principal, sometimes the loading principal,
sometimes the channel principal.
Our existing content policy implementations which require or expect a loading
principal currently retrieve it from the context node. Since no current
callers require the principal to be the loading principal, and some already
expect it to be the triggering principal (which there's currently no other way
to retrieve), I chose to pass the triggering principal whenever possible, but
use the loading principal to determine the origin URL.
As a follow-up, I'd like to change the nsIContentPolicy interface to
explicitly receive loading and triggering principals, or possibly just
LoadInfo instances, rather than poorly-defined request
origin/principal/context args. But since that may cause trouble for
comm-central, I'd rather not do it as part of this bug.
MozReview-Commit-ID: LqD9GxdzMte
--HG--
extra : rebase_source : 41ce439912ae7b895e0a3b0e660fa6ba571eb50f
It's easy to mess up the scoping so that (a) the label is pushed and then
immediately popped, and/or (b) the string doesn't live long enough. It's
also easy to do a UTF-16 to UTF-8 conversion unnecessarily when the
profiler is inactive.
This patch splits the existing AUTO_PROFILER_LABEL_DYNAMIC macro into
three new ones that are harder to mess up.
- AUTO_PROFILER_LABEL_DYNAMIC_CSTR: same as current.
- AUTO_PROFILER_LABEL_DYNAMIC_NSCSTRING: for nsCStrings.
- AUTO_PROFILER_LABEL_DYNAMIC_LOSSY_NSSTRING: for nsStrings.
--HG--
extra : rebase_source : 53c8b43b6a1be06d00618a133e28bf95c46a3ba3
It's easy to mess up the scoping so that (a) the label is pushed and then
immediately popped, and/or (b) the string doesn't live long enough. It's
also easy to do a UTF-16 to UTF-8 conversion unnecessarily when the
profiler is inactive.
This patch splits the existing AUTO_PROFILER_LABEL_DYNAMIC macro into
three new ones that are harder to mess up.
- AUTO_PROFILER_LABEL_DYNAMIC_CSTR: same as current.
- AUTO_PROFILER_LABEL_DYNAMIC_NSCSTRING: for nsCStrings.
- AUTO_PROFILER_LABEL_DYNAMIC_LOSSY_NSSTRING: for nsStrings.
--HG--
extra : rebase_source : 59f77df0124249bfd11fee3585420a17b4201d37
The imgLoader code consistently uses the term 'loadingPrincipal' for the
principal that is called the triggeringPrincipal everywhere else it's used.
This is confusing, and since we need to make changes to how those values are
determined, it should be fixed beforehand.
MozReview-Commit-ID: 8CTHwayzcaD
--HG--
extra : rebase_source : d4405b0ecfe1c8dfb9bfdf61fe6ed6cfb180ba83
Currently the Gecko Profiler defines a moderate amount of stuff when
MOZ_GECKO_PROFILER is undefined. It also #includes various headers, including
JS ones. This is making it difficult to separate Gecko's media stack for
inclusion in Servo.
This patch greatly simplifies how things are exposed. The starting point is:
- GeckoProfiler.h can be #included unconditionally;
- everything else from the profiler must be guarded by MOZ_GECKO_PROFILER.
In practice this introduces way too many #ifdefs, so the patch loosens it by
adding no-op macros for a number of the most common operations.
The net result is that #ifdefs and macros are used a bit more, but almost
nothing is exposed in non-MOZ_GECKO_PROFILER builds (including
ProfilerMarkerPayload.h and GeckoProfiler.h), and understanding what is exposed
is much simpler than before.
Note also that in BHR, ThreadStackHelper is now entirely absent in
non-MOZ_GECKO_PROFILER builds.
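The no-op fallback pattern looks roughly like this (simplified; the real
macro bodies differ):
  #ifdef MOZ_GECKO_PROFILER
  #  define AUTO_PROFILER_INIT mozilla::AutoProfilerInit raiiInit
  #  define AUTO_PROFILER_THREAD_SLEEP mozilla::AutoProfilerThreadSleep raiiSleep
  #else
  #  define AUTO_PROFILER_INIT
  #  define AUTO_PROFILER_THREAD_SLEEP
  #endif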
This patch:
- adds fails-if annotations for all the reftests that were consistently failing
with layers-free turned on.
- removes fails-if or reduces the range on fuzzy-if annotations for all
the reftests that were producing UNEXPECTED-PASS results with
layers-free turned on.
- adds skip-if, random-if, or fuzzy-if annotations to the reftests that
were intermittently failing due to timeout, obvious incorrectness, or
slight pixel differences, respectively.
MozReview-Commit-ID: A0Aknn6rnjj
--HG--
extra : rebase_source : 420d9cf43f23a5d654fa36eec69138937d13c173
Currently we only permit requests from HTTP channels to be retargeted to
the image IO thread. It was implemented this way originally in bug
867755 but it does not appear there was a specific reason for that.
The only kink is that some browser chrome mochitests listen for
debug-build-only observer notifications to ensure certain chrome images
are loaded and/or drawn. As such, this patch ensures that those
notifications continue to be delivered, which requires a dispatch from
the image IO thread to the main thread.
Another issue to note is that SVGs must be processed on the main thread;
the underlying SVG document can only be accessed from it. We enforce
this by checking the content type. The possibility already exists that
an HTTP response could contain the wrong content type, and in that case,
we fail to decode the image, as there is no content sniffing support for
SVG. Thus there should be no additional risk in using the image IO
thread for non-HTTP channels: if they don't specify the SVG content type,
the image is not rendered today, and if they do, processing will remain
on the main thread as it does today.
We also ignore data URIs. The specification requires that we process
these images synchronously. See bug 1325080 for details.
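Roughly, the retargeting decision looks like this (a sketch rather than
the exact imgRequest code; the member names and the decode pool accessor
are assumptions):
  // Called once the content type is known, e.g. from OnStartRequest.
  nsCOMPtr<nsIThreadRetargetableRequest> retargetable =
    do_QueryInterface(aRequest);
  if (retargetable &&
      !mContentType.EqualsLiteral(IMAGE_SVG_XML) &&  // SVG stays on main thread
      !mIsDataURI) {                                 // data: URIs stay synchronous
    nsCOMPtr<nsIEventTarget> target =
      DecodePool::Singleton()->GetIOEventTarget();   // assumed accessor
    if (target) {
      retargetable->RetargetDeliveryTo(target);
    }
  }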
There are a number of surface cache operations which may result in
individual surfaces for a particular image cache being removed. If an
image cache is emptied, and we are in factor-of-2 mode, we should reset
it to the default mode, because we require at least one surface to be
available to determine the native/ideal size. Additionally, if the cache
is not locked, it should be removed entirely from the surface cache. We
handle this correctly in methods such as Lookup and LookupBestMatch, but
Prune and CollectSizeOfSurfaces, which changed recently in bug 1370412
and bug 1380649, can also cause this to happen.
In order to let Necko postpone the load of a favicon, we have to set a
request context ID on the HTTP channel that is created to load it. This
patch starts by passing a request context ID to nsContentUtils::LoadImage
and makes the other changes necessary to set that ID on the channel.
When we look up a surface in the cache, we are careful to remove any
surfaces which were backed by volatile memory and got purged before we
could reacquire the buffer. We were not so careful in doing that when
generating memory reports. ISurfaceProvider::AddSizeOfExcludingThis will
cause us to acquire the buffer and, if it was purged, to forget about its
purged status. Later, when we performed a lookup, the purged status had
been lost, so we would assume we had the right data. This would appear as
completely transparent for BGRA surfaces, and completely black for BGRX
surfaces.
With this patch, we now properly remove purged surfaces instead of
including them in the report. This ensures that the cache state is
consistent. This also resolves memory reports of surfaces which reported
using no data -- they were purged when the report was generated.
Additionally, there was a bug in SurfaceCache::PruneImage where we did
not discard surfaces outside the module lock. Both PruneImage and
CollectSizeOfSurfaces now free any discarded surfaces outside the lock.
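The lock fix follows a pattern along these lines (a sketch; the container
and helper names are illustrative):
  void SurfaceCache::PruneImage(const ImageKey aImageKey)
  {
    nsTArray<RefPtr<CachedSurface>> discard;  // keeps discards alive past the lock
    {
      StaticMutexAutoLock lock(sInstanceMutex);
      if (sInstance) {
        // Collect the surfaces to drop, but don't release them here.
        sInstance->PruneImage(aImageKey, lock, discard);
      }
    }
    // `discard` goes out of scope here, after the lock has been released,
    // so the surfaces are freed without holding the module lock.
  }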
When the surface cache starts tracking an unlocked surface, it must
insert it into the expiration tracker, so that it can be freed later if
it remains unused. ExpirationTrackerImpl::AddObjectLocked can fail
due to out-of-memory conditions or during shutdown, which we previously
ignored, and could leave us in a state where we think the surface is in
the tracker but is not. When we later try to mark the surface as used in
the tracker, it will hit a release assert because it doesn't exist. Now
we handle the insertion failure by discarding the surface. Marking the
surface as used can itself encounter a similar issue, and we handle it
the same way.
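A sketch of the insertion-failure handling (simplified; the discard
helper here is hypothetical):
  nsresult rv = mExpirationTracker.AddObjectLocked(aSurface, aAutoLock);
  if (NS_WARN_IF(NS_FAILED(rv))) {
    // Tracking failed (OOM or shutdown). Don't leave the cache believing
    // the surface is tracked; discard it so a later attempt to mark it as
    // used cannot hit the release assert.
    DiscardSurface(aSurface, aAutoLock);  // hypothetical helper
    return rv;
  }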
MozReview-Commit-ID: Kv6l0znnG48
An ImageSurfaceCache cannot enter factor-of-2 mode without a minimum
number of surfaces being present in its cache. However those surfaces
can be purged from the cache through various means (expiring due to
disuse, volatile buffers being purged, etc.). Also, it is entirely possible
that all the surfaces get purged, but the cache itself remains. Since
factor-of-2 mode requires at least one surface (to get the owning image
and its native size), we need to handle the case when the cache is
emptied appropriately. As such, we now reset the factor-of-2 mode (and
its pruned state) to the default (false) if we transition from non-empty
to empty.
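In sketch form (the member names are close to, but not necessarily
exactly, the real ones):
  // Called after any operation that may have removed surfaces.
  void ImageSurfaceCache::AfterRemoval()
  {
    if (IsEmpty()) {
      // No surfaces are left to tell us the owning image's native size,
      // so drop back to the default mode and forget the pruned state.
      mFactor2Mode = false;
      mFactor2Pruned = false;
    }
  }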
MozReview-Commit-ID: EVaEqW59Asv
When SurfaceCache::Lookup is called to access surface data, it indicates
that the caller will not accept substitutes, unlike in the case of
SurfaceCache::LookupBestMatch. As such, we need to be careful not to
remove those surfaces from our cache when pruning (in part 8b). This is
the marker used to track that, at some point, there was a caller which
got this surface and would accept no other (e.g. factor-of-2 mode must
make an exception for this particular surface).
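In sketch form (the flag name is illustrative of the marker, not
necessarily the real identifier):
  // During SurfaceCache::Lookup: remember that a caller required exactly
  // this surface, so pruning (part 8b) must not discard it later, even in
  // factor-of-2 mode.
  if (RefPtr<CachedSurface> surface = cache->Lookup(aSurfaceKey)) {
    surface->SetCannotSubstitute();
  }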