The shared memory handle reporting has been generalized into external
handle reporting. This is used both for shared memory and for volatile
memory (on Android). It will give us a better sense of just how many
handles are being used by images on Android.
Additionally, we were not properly reporting forced heap-allocated
memory when putting animated frames on the heap. This is because we
used SourceSurfaceAlignedRawData without implementing
AddSizeOfExcludingThis.
image.mem.volatile.min_threshold_kb is the minimum buffer allocation for
an image frame, in KB, before it will use volatile memory. If the
allocation is smaller, it will use the heap. This is only set to > 0 on
Android.
image.mem.animated.use_heap forces image frames for animated images to
use the heap. This is only enabled on Android, where it was previously a
compile-time option.
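A minimal sketch of how the two prefs might gate the allocation decision
(function and member names are illustrative, not the actual imagelib code):

    #include "mozilla/Preferences.h"

    // Sketch: decide between heap and volatile memory for a frame buffer.
    static bool ShouldUseVolatileMemory(size_t aBytes, bool aIsAnimated) {
      if (aIsAnimated &&
          mozilla::Preferences::GetBool("image.mem.animated.use_heap")) {
        return false;  // animated frames forced onto the heap
      }
      int32_t thresholdKB =
          mozilla::Preferences::GetInt("image.mem.volatile.min_threshold_kb");
      // Allocations below the threshold use the heap.
      return int64_t(aBytes / 1024) >= thresholdKB;
    }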
Move the initialization of SharedSurfacesParent out of the compositor
thread creation so that it mirrors the other WebRender-specific components, such
as the render thread creation. Now it will only be created if WebRender
is in use. Also prevent shared surfaces from being used by the image
frame allocator, even if image.mem.shared is set -- there is no purpose
in allowing this at present. It was causing startup crashes for users
who requested image.mem.shared and/or WebRender via gfx.webrender.all
but did not actually get WebRender at all. Surfaces would get allocated
in the shared memory, try to register themselves with the WR render
thread, and then crash since that thread was never created.
The image decoding thread pool can grow to be quite large, up to 32
threads, depending on the number of processors on the system. If the
user is not actively browsing, these threads are occupying resources
which could be reused elsewhere. After the timeout period, it will
release up to half of the threads in the pool.
Currently imagelib's DecodePool spawns the maximum number of threads
during startup, based on the number of processors. This patch changes it
to spawn a single thread on startup (which cannot fail), and to spawn
more, up to the maximum, as jobs are added to the queue. A thread will
only be spawned if there is a backlog present when a new job is added.
This typically results in fewer threads allocated in the parent process,
as well as deferred spawning in the content processes.
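A rough sketch of the backlog-driven spawning, with hypothetical member
names:

    // Sketch: grow the pool by one thread only when a new job arrives
    // and there is a backlog, up to the processor-derived maximum.
    void DecodePool::PushWork(RefPtr<IDecodingTask> aTask) {
      MutexAutoLock lock(mMutex);
      mQueue.push(std::move(aTask));
      if (mQueue.size() > mIdleThreads && mThreads.Length() < mMaxThreads) {
        SpawnWorkerThread();  // best-effort; failure just defers the work
      }
      mCondVar.Notify();
    }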
Originally we attempted to finalize the current frame from the contained
decoder in nsICODecoder::FinishResource. This is wrong because we
haven't acquired the frame from the contained decoder yet. This happens
in nsICODecoder::GetFinalStateFromContainedDecoder, and so the
imgFrame::Finalize call should be moved there. This was causing us to
use fallback image sharing with WebRender after a GPU process crash,
instead of shared surfaces, because WebRender cannot get a new file
handle for the surface data until we have finished writing all of the
image data.
Originally image decoding tasks were processed in FILO order, because
that was the most efficient way to use an nsTArray as a queue. This
patch changes the decoding pool to use a std::queue to guarantee FIFO
ordering (relative to the priority of the tasks). This allows the
first images requested to be the first images displayed.
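A minimal sketch of the FIFO pop, assuming two priority queues:

    std::queue<RefPtr<IDecodingTask>> mHighPriorityQueue;
    std::queue<RefPtr<IDecodingTask>> mLowPriorityQueue;

    // Oldest job first within each priority level.
    RefPtr<IDecodingTask> DecodePool::PopWorkLocked() {
      auto& queue = !mHighPriorityQueue.empty() ? mHighPriorityQueue
                                                : mLowPriorityQueue;
      if (queue.empty()) {
        return nullptr;  // caller waits on the condition variable
      }
      RefPtr<IDecodingTask> task = std::move(queue.front());
      queue.pop();
      return task;
    }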
The change to RootAccessible.cpp fixes an obvious bug introduced in bug 741707.
The visibility changes in gfx/thebes are because NS_DECL_ISUPPORTS has a
trailing "public:" that those classes were relying on to have public
constructors.
MozReview-Commit-ID: IeB8KIJCGhU
In order to reduce the log size, increase the snapshot polling timeout
from 1ms to 20ms. Additionally use SimpleTest.requestCompleteLog() to
ensure we get everything when the test eventually fails.
If there is an active provider which has yet to produce a frame, any
calls to SurfaceCache::Lookup will return MatchType::PENDING. If
RasterImage::Lookup gets the above result while given FLAG_SYNC_DECODE,
it will attempt to start a new decoder. It is entirely possible that
when we try to insert the new provider into the SurfaceCache, the
insertion fails because the original provider finally did produce
something. In that case we should abandon the attempted redecode and
retry our lookup.
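A sketch of the recovery path, with hypothetical surrounding code:

    // If the insert fails because the original provider produced a frame
    // in the meantime, drop the new decoder and retry the lookup.
    if (SurfaceCache::Insert(provider) != InsertOutcome::SUCCESS) {
      return LookupFrameInternal(aSize, aFlags);  // hypothetical retry
    }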
These asserts are somewhat faulty given the
image.downscale-during-decode.enabled preference is a live preference
and thus can change at any time. Given the decision to downscale is made
on the main thread, and it is asserted on a decoder thread, this will
always be inherently racy. Most of the time this isn't a problem, but
with our automated tests, we frequently flip this preference, and the
assertion may fail unnecessarily with an unrelated image. The reftests
themselves verify downscaling did or did not occur based upon comparison
to the reference, and don't require the assert for verification.
There are two other means by which a caller can get the current state,
which originally ignored validation -- GetImageStatus and
StartDecodingWithResult. These methods are used by layout in some
circumstances to decide whether or not the image is ready to display. As
observed in some web platform tests, in particular
css/css-backgrounds-3/background-size-031.html, we may actually validate
and purge the cache for images under test. The state given by the
aforementioned methods was misleading, because validation changed it.
Now they take into account validation, and do not imply any particular
state while validation is in progress.
IProgressObserver::SetNotificationsDeferred is now used just for
ProgressTracker to track when there is a pending notification for
an observer. It has been renamed to MarkPendingNotify and
ClearPendingNotify to make a clear distinction.
When cache validation is in progress, imgRequestProxy defers its
notifications to its listener until the validation is complete. This is
because the cache may be discarded, and the current state will change.
It attempted to share the same flags with the notification deferrals
used by ProgressTracker to indicate that there is a pending
notification, but this proved problematic and confusing. Hence this
patch creates dedicated flags for notification deferrals due to cache
validation.
This patch was autogenerated by my decomponents.py
It covers almost every file with the extension js, jsm, html, py,
xhtml, or xul.
It removes blank lines after removed lines, when the removed lines are
preceded by either blank lines or the start of a new block. The "start
of a new block" is defined fairly hackily: either the line starts with
//, ends with */, ends with {, <![CDATA[, """ or '''. The first two
cover comments, the third one covers JS, the fourth covers JS embedded
in XUL, and the final two cover JS embedded in Python. This also
applies if the removed line was the first line of the file.
It covers the pattern matching cases like "var {classes: Cc,
interfaces: Ci, utils: Cu, results: Cr} = Components;". It'll remove
the entire thing if they are all either Ci, Cr, Cc or Cu, or it will
remove the appropriate ones and leave the residue behind. If there's
only one behind, then it will turn it into a normal, non-pattern
matching variable definition. (For instance, "const { classes: Cc,
Constructor: CC, interfaces: Ci, utils: Cu } = Components" becomes
"const CC = Components.Constructor".)
MozReview-Commit-ID: DeSHcClQ7cG
--HG--
extra : rebase_source : d9c41878036c1ef7766ef5e91a7005025bc1d72b
This was done using the following script:
37e3803c7a/processors/chromeutils-import.jsm
MozReview-Commit-ID: 1Nc3XDu0wGl
--HG--
extra : source : 12fc4dee861c812fd2bd032c63ef17af61800c70
extra : intermediate-source : 34c999fa006bffe8705cf50c54708aa21a962e62
extra : histedit_source : b2be2c5e5d226e6c347312456a6ae339c1e634b0
This patch adjusts tools/fuzzing/ in such a way that the relevant parts can be
reused in the JS engine. Changes in detail include:
* Various JS_STANDALONE checks to exclude parts that cannot be included in
those builds.
* Turn LibFuzzerRegistry and LibFuzzerRunner into generic FuzzerRegistry and
FuzzerRunner classes and use them for AFL as well. Previously, AFL was
piggy-backing on gtests which was kind of an ugly solution anyway (besides
that it can't work in JS). Now more code like registry and harness is
shared between the two and they follow almost the same call paths and entry
points. AFL macros in FuzzingInterface have been rewritten accordingly.
This also required name changes in various places. Furthermore, this unifies
the way the fuzzing target is selected, using the FUZZER environment
variable rather than LIBFUZZER (using LIBFUZZER in browser builds will give
you a deprecation warning because I know some people are using this already
and need time to switch). Previously, AFL target had to be selected using
GTEST_FILTER, so this is also much better now.
* I had to split up FuzzingInterface* such that the STREAM parts are in a
separate set of files FuzzingInterfaceStream* because they use nsStringStream
which is not allowed to be included into the JS engine even in a full browser
build (error: "Using XPCOM strings is limited to code linked into libxul.").
I also had to pull FuzzingInterface.cpp (the RAW part only) into the header
and make it static because otherwise, we would have to make not only separate
files but also separate libraries to statically link to the JS engine, which
seemed overkill for a single small function. The streaming equivalent of the
function is still in a cpp file.
* LibFuzzerRegister functions are now unique by appending the module name to
avoid redefinition errors.
MozReview-Commit-ID: 44zWCdglnHr
--HG--
rename : tools/fuzzing/libfuzzer/harness/LibFuzzerRunner.cpp => tools/fuzzing/interface/harness/FuzzerRunner.cpp
rename : tools/fuzzing/libfuzzer/harness/LibFuzzerRunner.h => tools/fuzzing/interface/harness/FuzzerRunner.h
rename : tools/fuzzing/libfuzzer/harness/LibFuzzerTestHarness.h => tools/fuzzing/interface/harness/FuzzerTestHarness.h
rename : tools/fuzzing/libfuzzer/harness/moz.build => tools/fuzzing/interface/harness/moz.build
rename : tools/fuzzing/libfuzzer/harness/LibFuzzerRegistry.cpp => tools/fuzzing/registry/FuzzerRegistry.cpp
rename : tools/fuzzing/libfuzzer/harness/LibFuzzerRegistry.h => tools/fuzzing/registry/FuzzerRegistry.h
extra : rebase_source : 7d0511ca0591dbf4d099376011402e063a79ee3b
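For reference, a minimal sketch of the generic registry shape now shared by
libFuzzer and AFL; the exact API in
tools/fuzzing/registry/FuzzerRegistry.h may differ:

    // Sketch only: a singleton mapping module names to init/testing
    // functions; the harness consults it based on the FUZZER
    // environment variable.
    class FuzzerRegistry {
     public:
      static FuzzerRegistry& getInstance();
      void registerModule(std::string aModuleName, FuzzerInitFunc aInit,
                          FuzzerTestingFunc aTesting);
      // ...
    };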
These are all no-ops because the objects involved are already implementing one of the WebIDL interfaces that pulls in MozImageLoadingContent, and that's all script gets to see.
MozReview-Commit-ID: Io2mLHbv7qM
* Change calls to use nsIURIMutator.setSpec()
* Add new NS_MutateURI constructor that takes new Mutator object
* Make nsSimpleNestedURI::Mutate() and nsNestedAboutURI::Mutate() return mutable URIs
* Make the finalizers for nsSimpleNestedURI and nsNestedAboutURI make the returned URIs immutable
MozReview-Commit-ID: 1kcv6zMxnv7
--HG--
extra : rebase_source : 99b13e9dbc8eaaa9615843b05e1539e19b527504
All of these tests have existing fuzzy annotations which cover the
differences in the WR renderings. Therefore we can remove the
fails-if(webrender) annotations and use the existing fuzzy annotations
to treat the tests as passing.
MozReview-Commit-ID: LFWha6gAP2r
--HG--
extra : rebase_source : b26a0d0cd66b6bab273251e6a2de9210417ba798
If we aren't using a downscaler we avoid this bug because the mask is either 100% transparent or 100% opaque, and in the transparent case we just set the whole pixel (32 bits) to 0.
But when we are using a downscaler we just replace the alpha values in the original surface (leaving the color values untouched).
We need to go the full premultiply route because after downscaling the mask we can have any value for alpha instead of just 0 or 255.
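A minimal sketch of premultiplying one pixel (an illustrative helper, not
the decoder's actual code):

    // Scale each color channel by alpha; after downscaling, alpha can be
    // any value in [0, 255], not just 0 or 255.
    static uint32_t PremultiplyPixel(uint8_t aR, uint8_t aG, uint8_t aB,
                                     uint8_t aA) {
      auto mul = [aA](uint8_t aC) -> uint32_t {
        return (uint32_t(aC) * aA + 127) / 255;  // division by 255, rounded
      };
      return (uint32_t(aA) << 24) | (mul(aR) << 16) | (mul(aG) << 8) | mul(aB);
    }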
This removes an unnecessary level of indirection by replacing all
nsStringGlue.h instances with just nsString.h.
--HG--
extra : rebase_source : 340989240af4018f3ebfd92826ae11b0cb46d019
imgLoader::ValidateEntry would aggressively determine an entry has
expired, even when the request hasn't yet begun. This is because the
expiration time for the entry was not set unless it was for a channel
which supports caching. Now we set the expiration time for all
channels; if a channel doesn't support caching, its entry simply
expires at the current time when imgRequest::OnStartRequest is called.
Additionally,
imgLoader::ValidateEntry will not consider the expiration time in the
entry until it is non-zero.
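A sketch of the new behavior, with hypothetical helper names:

    // In imgRequest::OnStartRequest (sketch):
    uint32_t expiration = GetExpirationTimeForChannel(mChannel);  // hypothetical
    if (expiration == 0 && !ChannelSupportsCaching(mChannel)) {   // hypothetical
      expiration = NowInSeconds();  // expires immediately
    }
    mCacheEntry->SetExpiryTime(expiration);
    // imgLoader::ValidateEntry ignores the expiration time while it is zero.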
Factory::DoesBackendSupportDataDrawtarget already fulfills the same
purpose and we should use that instead, as imgFrame is the only user of
the former API. It has the added bonus of allowing us to use shared
surfaces on Linux with WebRender, and using volatile surfaces on Windows
when D2D is disabled.
The "current URL" in the spec:
https://html.spec.whatwg.org/multipage/embedded-content.html#dom-img-currentsrc
maps to imgIRequest.URI, not currentURI.
Rename imgIRequest.currentURI to finalURI to prevent such confusion.
MozReview-Commit-ID: CjBh2V4z8K9
--HG--
extra : rebase_source : 01277d16ef12845e12cc846f9dd4a21ceeca283b
This also changes URIUtils.cpp:DeserializeURI() to use the mutator to instantiate new URIs, instead of using their default constructor.
MozReview-Commit-ID: JQOvIquuQAP
--HG--
extra : rebase_source : e146624c5ae423f7f69a738aaaafaa55dd0940d9
The "current URL" in the spec:
https://html.spec.whatwg.org/multipage/embedded-content.html#dom-img-currentsrc
maps to imgIRequest.URI, not currentURI.
Rename imgIRequest.currentURI to finalURI to prevent such confusion.
MozReview-Commit-ID: CjBh2V4z8K9
--HG--
extra : rebase_source : d3047aed22f116ff9a74099b646a84e597388673
This is important because it ensures we release the shared memory handle
(although not the data itself) for the underlying surface buffer when it
turns out we will probably never need to share it. If we do need to
share the surface data with the GPU process, it will reallocate a handle
if necessary, and close it when it is finished. On some platforms we
only have a finite number of handles, so if we don't need them, we
should close them.
This is largely trivial because the meat of the implementation is
located in ImageResource and we already added GetFrameInternal.
Interestingly VectorImage::IsUnlocked does not actually check if the
image is locked, but instead only checks for animation consumers. This
is consistent with its historical behavior on when to issue an unlocked
draw event.
Note that we do not implement the original GetImageContainer and
IsImageContainerAvailable APIs. This is because the former does not
accept an SVG context and it would be best to discourage its use in old
code lest we get incorrect/unexpected results.
No functional change aside from the implementation of
VectorImage::GetFrameAtSize being repurposed for GetFrameInternal,
which returns an additional error code with the surface.
Creating a DrawTarget can be an expensive operation. This is especially
true in this case because checking for a cached already decoded version
of the VectorImage is expected to be fast. Currently VectorImage::Draw
is the typical path to render these images, but in the future, getting
the frames directly or indirectly (through an ImageContainer) will
become more common.
When FLAG_HIGH_QUALITY_SCALING is used, we need to make sure we continue
using that flag when we update the container. We should also use it for
comparing whether or not an existing image container is equivalent.
This adds IsImageContainerAvailableAtSize and GetImageContainerAtSize to
the imgIContainer interface, as well as stubs for all of the classes
which implement it. The real implementations will follow for the more
complicated classes (RasterImage, VectorImage).
Exposure of this functionality comes in a later patch in the set.
Experimental testing with WebRender and image layers enabled suggests
most of the time we are not using more than one image container per
image, hence why mImageContainers has room for one container without a
malloc.
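The member declaration implied by that measurement might look like this
(illustrative):

    // Inline storage for one container avoids a heap allocation in the
    // common case of a single image container per image.
    AutoTArray<RefPtr<mozilla::layers::ImageContainer>, 1> mImageContainers;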
RasterImage::GetCurrentImage can only return a subset of the DrawResult
values, and the original RasterImage::GetImageContainer implementation
relied upon this behavior. Now we handle them all to ensure that when
other image implementations reuse it, they may return any valid
DrawResult and get the expected results.
As part of the move, we add an IntSize parameter to
ImageResource::GetCurrentImage. This is because we don't have access to
the image's size (yet) from ImageResource, but additionally because we
will need this anyway when we support multiple image containers at
different sizes.
The only change to the moved implementation is that we no longer have
access to RasterImage::mHasSize and RasterImage::mSize. Thus we rely
upon imgIContainer::IsImageContainerAvailable to perform these checks.
This state will eventually be used by VectorImage when it supports image
containers. For now, it is harmless beyond using slightly more memory
for SVGs.
An imgRequestProxy may defer notifications when it needs to block on an
imgCacheValidator. It may also be cancelled before the validator has
completed its operation, but before this change, we did not remove the
request from the set of proxies, imgCacheValidator::mProxies. When the
deferral was completed, it would assert to ensure each proxy was still
expecting a deferral before issuing the notifications. Cancelling a
request can actually reset that state, which means we fail the assert.
Failing the assert is actually harmless; in release we suffer no
negative consequences as a result of this sequence of events. Now we
just remove the proxy from the validator set to avoid asserting.
The core of this change is in gfxContext.*:
- change gfxContext::CurrentMatrix() and gfxContext::SetMatrix() to
return and take a Matrix respectively, instead of converting to
and from a gfxMatrix (which uses doubles). These functions therefore
will now match the native representation of the transform in gfxContext.
- add two new functions CurrentMatrixDouble() and SetMatrixDouble() that
do what the old CurrentMatrix() and SetMatrix() used to do, i.e.
convert between the float matrix and the double matrix.
The rest of the change is just updating the call sites to avoid round-
tripping between floats and doubles where possible. Call sites that are
hard to fix are migrated to the new XXXDouble functions, which preserve
the existing behaviour.
MozReview-Commit-ID: 5sbBpLUus3U
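The resulting API shape, sketched (see gfxContext.h for the real
declarations):

    // Native float-based transform accessors:
    mozilla::gfx::Matrix CurrentMatrix() const;
    void SetMatrix(const mozilla::gfx::Matrix& aMatrix);
    // Double-based variants that convert, preserving the old behaviour:
    gfxMatrix CurrentMatrixDouble() const;
    void SetMatrixDouble(const gfxMatrix& aMatrix);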
imgRequestProxy::CancelAndForgetObserver was intended to always dispatch
any load group removals due to reentrancy conflicts with the callers.
However, bug 1404422 missed the fact that imgRequest::RemoveProxy can
indirectly trigger a load group removal through completing an
incomplete request.
Historically imgRequestProxy::PerformClone would only add the cloned
request to the (original proxy's) document's load group if the request
was still being validated. Now it adds the cloned request to the given
document's load group before requesting the notifications, unless the
request has already been completed. We ensure that any removals from
the load group occur outside the current execution context.
Legacy listeners may use imgRequestProxy::SyncClone to request
notifications on the image state. Ideally they would not, but they do
not work as expected with the asynchronous notifications all new callers
must use. While in theory this would suggest their code is re-entrant,
not all of it is. In particular we need to be sensitive about when we
remove a request from a load group.
There should be no functional change here, but we rely upon the new
structure in the next patch in the series. This separates out the
notions of removing a request from the load group (which is always
final, and must be executed outside of synchronous calls from the owner
of the imgRequestProxy) and wanting to re-add a request to the load group
as a background request (for multipart images).
The most important addition is mForceDispatchLoadGroup which if true
when imgRequestProxy::RemoveFromLoadGroup is called, will dispatch the
removal from the load group instead of executing it inline. This ensures
safety for any callers (e.g. to CancelAndForgetObserver) as above.
imgRequestProxy::SetLoadGroup did not have a predictable effect and
it appears to be unused. It is somewhat complicated to support given
we must be sensitive about what context we execute removing the
request from the original load group.
imgLoader::LoadImage now asserts in debug builds that the load group
given as a parameter matches that of the given document (if any). If
they mismatch, then we won't be blocking the document's load event as we
expect with the future removal of the imgIOnloadBlocker.
imgLoader::LoadImageWithChannel never actually added the request to the
load group at all, unless it was done as part of the validator. Now it
will consistently add the request to the channel's load group as
expected. Additionally it also asserts in debug builds that the
channel's load group matches that of the given document, as in
LoadImage.
It's easy to mess up the scoping so that (a) the label is pushed and then
immediately popped, and/or (b) the string doesn't live long enough. It's also
easy to do a utf16-to-utf8 conversion unnecessarily when the profiler is
inactive. This patch splits that macro into three new ones that are harder to
mess up.
- AUTO_PROFILER_LABEL_DYNAMIC_CSTR: same as current.
- AUTO_PROFILER_LABEL_DYNAMIC_NSCSTRING: for nsCStrings.
- AUTO_PROFILER_LABEL_DYNAMIC_LOSSY_NSSTRING: for nsStrings.
--HG--
extra : rebase_source : 3e2bbec4737b696e1c86579ae54be4cb3186c100
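Hypothetical usage of the three variants above (the label, category, and
string arguments are illustrative):

    AUTO_PROFILER_LABEL_DYNAMIC_CSTR("imgLoader::LoadImage", GRAPHICS,
                                     spec.get());          // const char*
    AUTO_PROFILER_LABEL_DYNAMIC_NSCSTRING("imgLoader::LoadImage", GRAPHICS,
                                          spec);           // nsCString
    AUTO_PROFILER_LABEL_DYNAMIC_LOSSY_NSSTRING("imgLoader::LoadImage", GRAPHICS,
                                               wideSpec);  // nsString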
Most cases where the pointer is stored into an already-declared variable can
trivially be changed to MakeNotNull<T*>, as the NotNull raw pointer will end
up in a smart pointer.
In RAII cases, the target type can be specified (e.g.:
`MakeNotNull<RefPtr<imgFrame>>()`), in which case the variable type may just be
`auto`, similar to the common use of MakeUnique.
Except when the target type is a base pointer, in which case it must be
specified in the declaration.
MozReview-Commit-ID: BYaSsvMhiDi
--HG--
extra : rebase_source : 8fe6f2aeaff5f515b7af2276c439004fa3a1f3ab
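Illustrative uses, assuming Gecko's mfbt/NotNull.h:

    #include "mozilla/NotNull.h"
    #include "mozilla/RefPtr.h"

    using mozilla::MakeNotNull;

    // Raw-pointer case: the NotNull<imgFrame*> ends up in a smart pointer.
    RefPtr<imgFrame> frame = MakeNotNull<imgFrame*>();

    // RAII case: the target type is spelled out, so `auto` suffices.
    auto lockedFrame = MakeNotNull<RefPtr<imgFrame>>();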
We're currently fairly vague and inconsistent about the values we provide to
content policy implementations for requestOrigin and requestPrincipal. In some
cases they're the triggering principal, sometimes the loading principal,
sometimes the channel principal.
Our existing content policy implementations which require or expect a loading
principal currently retrieve it from the context node. Since no current
callers require the principal to be the loading principal, and some already
expect it to be the triggering principal (which there's currently no other way
to retrieve), I chose to pass the triggering principal whenever possible, but
use the loading principal to determine the origin URL.
As a follow-up, I'd like to change the nsIContentPolicy interface to
explicitly receive loading and triggering principals, or possibly just
LoadInfo instances, rather than poorly-defined request
origin/principal/context args. But since that may cause trouble for
comm-central, I'd rather not do it as part of this bug.
MozReview-Commit-ID: LqD9GxdzMte
--HG--
extra : rebase_source : 41ce439912ae7b895e0a3b0e660fa6ba571eb50f
The imgLoader code consistently uses the term 'loadingPrincipal' for the
principal that is called the triggeringPrincipal everywhere else it's used.
This is confusing, and since we need to make changes to how those values are
determined, it should be fixed beforehand.
MozReview-Commit-ID: 8CTHwayzcaD
--HG--
extra : rebase_source : d4405b0ecfe1c8dfb9bfdf61fe6ed6cfb180ba83
Currently the Gecko Profiler defines a moderate amount of stuff when
MOZ_GECKO_PROFILER is undefined. It also #includes various headers, including
JS ones. This is making it difficult to separate Gecko's media stack for
inclusion in Servo.
This patch greatly simplifies how things are exposed. The starting point is:
- GeckoProfiler.h can be #included unconditionally;
- everything else from the profiler must be guarded by MOZ_GECKO_PROFILER.
In practice this introduces way too many #ifdefs, so the patch loosens it by
adding no-op macros for a number of the most common operations.
The net result is that #ifdefs and macros are used a bit more, but almost
nothing is exposed in non-MOZ_GECKO_PROFILER builds (including
ProfilerMarkerPayload.h and GeckoProfiler.h), and understanding what is exposed
is much simpler than before.
Note also that in BHR, ThreadStackHelper is now entirely absent in
non-MOZ_GECKO_PROFILER builds.
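The pattern, sketched with one illustrative macro (not the exact
GeckoProfiler.h contents):

    #ifdef MOZ_GECKO_PROFILER
    #  define AUTO_PROFILER_INIT mozilla::AutoProfilerInit PROFILER_RAII
    #else
    #  define AUTO_PROFILER_INIT  // no-op when the profiler is compiled out
    #endif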
This patch:
- adds fails-if annotations for all the reftests that were consistently failing
with layers-free turned on.
- removes fails-if or reduces the range on fuzzy-if annotations for all
the reftests that were producing UNEXPECTED-PASS results with
layers-free turned on.
- adds skip-if, random-if, or fuzzy-if annotations to the reftests that
were intermittently failing due to timeout, obvious incorrectness, or
slight pixel differences, respectively.
MozReview-Commit-ID: A0Aknn6rnjj
--HG--
extra : rebase_source : 420d9cf43f23a5d654fa36eec69138937d13c173
Currently we only permit requests from HTTP channels to be retargeted to
the image IO thread. It was implemented this way originally in bug
867755 but it does not appear there was a specific reason for that.
The only kink in this is that some browser chrome mochitests listen for
debug-build-only events to ensure certain chrome images are loaded and/or
drawn. As such, this patch ensures that those observer notifications
continue to be served, requiring a dispatch from the image IO thread to
the main thread.
Another issue to note is that SVGs must be processed on the main thread;
the underlying SVG document can only be accessed from it. We enforce
this by checking the content type. The possibility already exists that
an HTTP response could contain the wrong content type, and in that case,
we fail to decode the image, as there is no content sniffing support for
SVG. Thus there should be no additional risk taken by using the image IO
thread from other non-HTTP channels (if they don't specify the SVG
content type, it is not rendered today, and if they do, it will remain
on the main thread as it is today).
We also ignore data URIs. The specification requires that we process
these images synchronously. See bug 1325080 for details.
There are a number of operations on the surface cache which may result
in individual surfaces for a particular image cache being removed. If an
image cache is emptied, and we are in factor of 2 mode, we should reset
it to the default mode, because we require at least one surface to be
available to determine the native/ideal size. Additionally, if the cache
is not locked, it should be removed entirely from the surface cache. We
handle this correctly in methods such as Lookup and LookupBestMatch, but
Prune and CollectSizeOfSurfaces can also cause this to happen, as
recently done in bug 1370412 and bug 1380649.
In order to let necko postpone favicon loads, we have to set a request context ID on the HTTP channel that is created to load the favicon.
This patch starts by passing a request context ID to nsContentUtils::LoadImage and makes other necessary changes to set the request context ID on the channel.
When we lookup a surface in the cache, we are careful to remove any
surfaces which were backed by volatile memory and got purged before we
could reacquire the buffer. We were not so careful in doing that when
generating memory reports. ISurfaceProvider::AddSizeOfExcludingThis will
cause us to acquire the buffer, and if it was purged, forget about its
purged status. Later when we performed a lookup, we would forget the
purged status, and assume we have the right data. This would appear as
completely transparent for BGRA surfaces, and completely black for BGRX
surfaces.
With this patch, we now properly remove purged surfaces instead of
including them in the report. This ensures that the cache state is
consistent. This also resolves memory reports of surfaces which reported
using no data -- they were purged when the report was generated.
Additionally, there was a bug in SurfaceCache::PruneImage where we did
not discard surfaces outside the module lock. Both PruneImage and
CollectSizeOfSurfaces now free any discarded surfaces outside the lock.
When the surface cache starts tracking an unlocked surface, it must
insert it into the expiration tracker, so that it can be freed later if
it is remains unused. ExpirationTrackerImpl::AddObjectLocked can fail
due to out-of-memory conditions or during shutdown, which we previously
ignored, and could leave us in a state where we think the surface is in
the tracker but is not. When we later try to mark the surface as used in
the tracker, it will hit a release assert because it doesn't exist. Now
we handle the insertion failure by discarding the surface. Marking the
surface as used can itself encounter a similar issue, and we handle it
the same way.
MozReview-Commit-ID: Kv6l0znnG48
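A sketch of the failure handling, with hypothetical surrounding names:

    // Insertion can fail under OOM or during shutdown; discard the surface
    // rather than believe it is tracked when it is not.
    nsresult rv = mExpirationTracker.AddObjectLocked(aSurface, aAutoLock);
    if (NS_FAILED(rv)) {
      Remove(aSurface, /* aStopTracking */ false, aAutoLock);  // hypothetical
      return false;
    }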
An ImageSurfaceCache cannot enter factor-of-2 mode without a minimum
number of surfaces being present in its cache. However those surfaces
can be purged from the cache through various means (expiring due to
disuse, volatile buffers being purged, etc.). Also, it is entirely possible
that all the surfaces get purged, but the cache itself remains. Since
factor-of-2 mode requires at least one surface (to get the owning image
and its native size), we need to handle the case when the cache is
emptied appropriately. As such, we now reset the factor-of-2 mode (and
its pruned state) to the default (false) if we transition from non-empty
to empty.
MozReview-Commit-ID: EVaEqW59Asv
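A sketch of the reset, with hypothetical member names:

    // Called after any removal; without at least one surface we cannot
    // query the owning image's native size, so fall back to defaults.
    void ImageSurfaceCache::MaybeResetFactor2Mode() {
      if (IsEmpty()) {
        mFactor2Mode = false;
        mFactor2Pruned = false;
      }
    }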
When SurfaceCache::Lookup is called to access surface data, it indicates
that the caller will not accept substitutes as in the case of
SurfaceCache::LookupBestMatch. As such, we need to be careful not to
remove those surfaces from our cache when pruning (in part 8b). This is
the marker used to track that, at some point, there was a caller which
got this surface that would accept no other (e.g. factor of 2 mode must
make an exception for this particular surface).
In bug 1383499 we fixed the case where on Android, animated images could
consume all of the available file handles. This is because each volatile
buffer will contain a file handle on Android, and animated images can
contain many, many frames. However in doing so we introduced a bug where
the stride of the replacement surface was aligned to a 16-byte boundary.
We do not currently support any stride value other than pixel size * image width
in the image decoding framework. This may be something we correct in the
future but for now, we should just ensure all surfaces follow the
expected stride value.
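The expected stride, sketched (BytesPerPixel is from mozilla/gfx/Types.h;
the surrounding names are illustrative):

    // No extra row padding: the decoding framework only supports
    // stride == pixel size * image width.
    int32_t stride = aSize.width * BytesPerPixel(SurfaceFormat::B8G8R8A8);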
This is straightforward, with only two notable things.
- `#include "nsXPIDLString.h"` is replaced with `#include "nsString.h"`
throughout, because all nsXPIDLString.h did was include nsString.h. The
exception is for files which already include nsString.h, in which case the
patch just removes the nsXPIDLString.h inclusion.
- The patch removes the |xpidl_string| gtest, but improves the |voided| test to
cover some of its ground, e.g. testing Adopt(nullptr).
--HG--
extra : rebase_source : 452cc4a08046a1adb1a8099a7e85a1917de5add8