This patch adjusts tools/fuzzing/ in such a way that the relevant parts can be
reused in the JS engine. Changes in detail include:
* Various JS_STANDALONE checks to exclude parts that cannot be included in
those builds.
* Turn LibFuzzerRegistry and LibFuzzerRunner into generic FuzzerRegistry and
FuzzerRunner classes and use them for AFL as well. Previously, AFL was
piggy-backing on gtests, which was an ugly solution anyway (and one that
cannot work in JS). Now more code, such as the registry and harness, is
shared between the two, and they follow almost the same call paths and
entry points. The AFL macros in FuzzingInterface have been rewritten
accordingly. This also required name changes in various places. Furthermore,
this unifies the way the fuzzing target is selected, using the FUZZER
environment variable rather than LIBFUZZER (using LIBFUZZER in browser
builds will give you a deprecation warning, because some people are already
using it and need time to switch). Previously, the AFL target had to be
selected using GTEST_FILTER, so this is also much better now.
* I had to split up FuzzingInterface* such that the STREAM parts are in a
separate set of files FuzzingInterfaceStream* because they use nsStringStream
which is not allowed to be included into the JS engine even in a full browser
build (error: "Using XPCOM strings is limited to code linked into libxul.").
I also had to pull FuzzingInterface.cpp (the RAW part only) into the header
and make it static, because otherwise I would have had to create not only
separate files but also separate libraries to statically link to the JS
engine, which seemed like overkill for a single small function. The
streaming equivalent of the function is still in a cpp file.
* LibFuzzerRegister functions are now made unique by appending the module
name, to avoid redefinition errors (see the sketch below).
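A minimal sketch of the idea, with hypothetical names (the real macros live
in FuzzingInterface.h and the registry in FuzzerRegistry.h):

  #define MOZ_FUZZING_REGISTER(initFunc, testFunc, moduleName)        \
    static int FuzzerRegister##moduleName() {                         \
      ::mozilla::FuzzerRegistry::getInstance().registerModule(        \
          #moduleName, initFunc, testFunc);                           \
      return 0;                                                       \
    }                                                                 \
    static int gRegistered##moduleName = FuzzerRegister##moduleName();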
MozReview-Commit-ID: 44zWCdglnHr
--HG--
rename : tools/fuzzing/libfuzzer/harness/LibFuzzerRunner.cpp => tools/fuzzing/interface/harness/FuzzerRunner.cpp
rename : tools/fuzzing/libfuzzer/harness/LibFuzzerRunner.h => tools/fuzzing/interface/harness/FuzzerRunner.h
rename : tools/fuzzing/libfuzzer/harness/LibFuzzerTestHarness.h => tools/fuzzing/interface/harness/FuzzerTestHarness.h
rename : tools/fuzzing/libfuzzer/harness/moz.build => tools/fuzzing/interface/harness/moz.build
rename : tools/fuzzing/libfuzzer/harness/LibFuzzerRegistry.cpp => tools/fuzzing/registry/FuzzerRegistry.cpp
rename : tools/fuzzing/libfuzzer/harness/LibFuzzerRegistry.h => tools/fuzzing/registry/FuzzerRegistry.h
extra : rebase_source : 7d0511ca0591dbf4d099376011402e063a79ee3b
These are all no-ops because the objects involved already implement one of the WebIDL interfaces that pulls in MozImageLoadingContent, and that's all script gets to see.
MozReview-Commit-ID: Io2mLHbv7qM
* Change call sites to use nsIURIMutator.setSpec() (see the sketch below)
* Add a new NS_MutateURI constructor that takes a new Mutator object
* Make nsSimpleNestedURI::Mutate() and nsNestedAboutURI::Mutate() return mutable URIs
* Make the finalizers for nsSimpleNestedURI and nsNestedAboutURI mark the returned URIs as immutable
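For illustration, the resulting mutator pattern looks roughly like this
(nsSimpleURI::Mutator as an example; exact call sites vary):

  nsCOMPtr<nsIURI> uri;
  nsresult rv = NS_MutateURI(new nsSimpleURI::Mutator())  // new-constructor form
                  .SetSpec(NS_LITERAL_CSTRING("http://example.com/"))
                  .Finalize(uri);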
MozReview-Commit-ID: 1kcv6zMxnv7
--HG--
extra : rebase_source : 99b13e9dbc8eaaa9615843b05e1539e19b527504
All of these tests have existing fuzzy annotations which cover the
differences in the WR renderings. Therefore we can remove the
fails-if(webrender) annotations and use the existing fuzzy annotations
to treat the tests as passing.
MozReview-Commit-ID: LFWha6gAP2r
--HG--
extra : rebase_source : b26a0d0cd66b6bab273251e6a2de9210417ba798
If we aren't using a downscaler we avoid this bug because the mask is either 100% transparent or 100% opaque, and in the transparent case we just set the whole pixel (32 bits) to 0.
But when we are using a downscaler we just replace the alpha values in the original surface (leaving the color values untouched).
We need to go the full premultiply route because after downscaling the mask we can have any value for alpha instead of just 0 or 255.
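To make the arithmetic concrete, here is a minimal sketch of premultiplying
one 8-bit color channel by the mask's alpha (illustrative; the real code
goes through gfx premultiply helpers):

  #include <cstdint>

  // Multiply a color channel by alpha/255, rounding to nearest.
  static inline uint8_t PremultiplyChannel(uint8_t aColor, uint8_t aAlpha) {
    return static_cast<uint8_t>((uint32_t(aColor) * aAlpha + 127) / 255);
  }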
This removes an unnecessary level of indirection by replacing all
nsStringGlue.h instances with just nsString.h.
--HG--
extra : rebase_source : 340989240af4018f3ebfd92826ae11b0cb46d019
imgLoader::ValidateEntry would aggressively determine that an entry had
expired, even when the request hadn't yet begun. This is because the
expiration time for the entry was not set unless it was for a channel
which supports caching. Now we set the expiration time for all
channels; if a channel doesn't support caching, the entry simply expires
at the current time when imgRequest::OnStartRequest is called. Additionally,
imgLoader::ValidateEntry will not consider the expiration time in the
entry until it is non-zero.
Factory::DoesBackendSupportDataDrawtarget already fulfills the same
purpose and we should use that instead, as imgFrame is the only user of
the former API. It has the added bonus of allowing us to use shared
surfaces on Linux with WebRender, and using volatile surfaces on Windows
when D2D is disabled.
The "current URL" in the spec:
https://html.spec.whatwg.org/multipage/embedded-content.html#dom-img-currentsrc
maps to imgIRequest.URI, not currentURI.
Rename imgIRequest.currentURI to finalURI to prevent such confusion.
MozReview-Commit-ID: CjBh2V4z8K9
--HG--
extra : rebase_source : 01277d16ef12845e12cc846f9dd4a21ceeca283b
This also changes URIUtils.cpp:DeserializeURI() to use the mutator to instantiate new URIs, instead of using their default constructor.
MozReview-Commit-ID: JQOvIquuQAP
--HG--
extra : rebase_source : e146624c5ae423f7f69a738aaaafaa55dd0940d9
This is important because it ensures we release the shared memory handle
(although not the data itself) for the underlying surface buffer when it
turns out we will probably never need to share it. If we do need to
share the surface data with the GPU process, it will reallocate a handle
if necessary, and close it when it is finished. On some platforms we
only have a finite number of handles, so if we don't need them, we
should close them.
This is largely trivial because the meat of the implementation is
located in ImageResource and we already added GetFrameInternal.
Interestingly VectorImage::IsUnlocked does not actually check if the
image is locked, but instead only checks for animation consumers. This
is consistent with its historical behavior on when to issue an unlocked
draw event.
Note that we do not implement the original GetImageContainer and
IsImageContainerAvailable APIs. This is because the former does not
accept an SVG context and it would be best to discourage its use in old
code lest we get incorrect/unexpected results.
No functional change aside from the implementation of
VectorImage::GetFrameAtSize being repurposed for GetFrameInternal and
returning an additional error code with the surface.
Creating a DrawTarget can be an expensive operation. This is especially
true in this case because checking for a cached already decoded version
of the VectorImage is expected to be fast. Currently VectorImage::Draw
is the typical path to render these images, but in the future, getting
the frames directly or indirectly (through an ImageContainer) will
become more common.
When FLAG_HIGH_QUALITY_SCALING is used, we need to make sure we continue
using that flag when we update the container. We should also use it for
comparing whether or not an existing image container is equivalent.
This adds IsImageContainerAvailableAtSize and GetImageContainerAtSize to
the imgIContainer interface, as well as stubbing them for all of the
classes which implement it. The real implementations will follow for the
more complicated classes (RasterImage, VectorImage).
Exposure of this functionality comes in a later patch in the set.
Experimental testing with WebRender and image layers enabled suggests that
most of the time we are not using more than one image container per image,
which is why mImageContainers has room for one container without a malloc.
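As a rough illustration of the inline-capacity point (the member name is
from the message; the entry type is hypothetical):

  // One inline slot covers the common case; the array only mallocs if an
  // image ever needs a second container.
  struct ImageContainerEntry {  // hypothetical element type
    gfx::IntSize mSize;
    WeakPtr<layers::ImageContainer> mContainer;
  };
  AutoTArray<ImageContainerEntry, 1> mImageContainers;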
RasterImage::GetCurrentImage can only return a subset of the DrawResult
values, and the original RasterImage::GetImageContainer implementation
relied upon this behavior. Now we handle them all to ensure that when
other image implementations reuse it, they may return any valid
DrawResult and get the expected results.
As part of the move, we add an IntSize parameter to
ImageResource::GetCurrentImage. This is because we don't have access to
the image's size (yet) from ImageResource, but additionally because we
will need it anyway when we support multiple image containers at
different sizes.
The only change to the moved implementation is that we no longer have
access to RasterImage::mHasSize and RasterImage::mSize. Thus we rely
upon imgIContainer::IsImageContainerAvailable to perform these checks.
This state will eventually be used by VectorImage when it supports image
containers. For now, it is harmless beyond using slightly more memory
for SVGs.
An imgRequestProxy may defer notifications when it needs to block on an
imgCacheValidator. It may also be cancelled before the validator has
completed its operation, but before this change, we did not remove the
request from the set of proxies, imgCacheValidator::mProxies. When the
deferral was completed, it would assert to ensure each proxy was still
expecting a deferral before issuing the notifications. Cancelling a
request can actually reset that state, which means we fail the assert.
Failing the assert is actually harmless; in release we suffer no
negative consequences as a result of this sequence of events. Now we
just remove the proxy from the validator set to avoid asserting.
The core of this change is in gfxContext.*:
- change gfxContext::CurrentMatrix() and gfxContext::SetMatrix() to
return and take a Matrix respectively, instead of converting to
and from a gfxMatrix (which uses doubles). These functions therefore
will now match the native representation of the transform in gfxContext.
- add two new functions CurrentMatrixDouble() and SetMatrixDouble() that
do what the old CurrentMatrix() and SetMatrix() used to do, i.e.
convert between the float matrix and the double matrix.
The rest of the change is just updating the call sites to avoid round-
tripping between floats and doubles where possible. Call sites that are
hard to fix are migrated to the new XXXDouble functions which preserves
the existing behaviour.
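Illustrative call-site updates (types as described above):

  // Common case: stay in floats now.
  gfx::Matrix m = aContext->CurrentMatrix();
  m.PreTranslate(offset.x, offset.y);
  aContext->SetMatrix(m);

  // Hard-to-fix sites keep double precision via the new variants.
  gfxMatrix md = aContext->CurrentMatrixDouble();
  aContext->SetMatrixDouble(md);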
MozReview-Commit-ID: 5sbBpLUus3U
imgRequestProxy::CancelAndForgetObserver was intended to always dispatch
any load group removals, due to reentrancy conflicts with the callers.
However, bug 1404422 showed that imgRequest::RemoveProxy can
indirectly trigger a load group removal by completing an
incomplete request.
Historically imgRequestProxy::PerformClone would only add the cloned
request to the (original proxy's) document's load group if the request
was still being validated. Now it adds the cloned request to the given
document's load group before requesting the notifications, unless the
request has already been completed. We ensure that any removals from
the load group occur outside the current execution context.
Legacy listeners may use imgRequestProxy::SyncClone to request
notifications on the image state. Ideally they would not, but they do
not work as expected with the asynchronous notifications all new callers
must use. While in theory this would suggest their code is re-entrant,
not all of it is. In particular we need to be sensitive about when we
remove a request from a load group.
There should be no functional change here, but we rely upon the new
structure in the next patch in the series. This separates out the
notions of removing a request from the load group (which is always
final, and must be executed outside of synchronous calls from the owner
of the imgRequestProxy) and wanting to re-add a request to the load group
as a background request (for multipart images).
The most important addition is mForceDispatchLoadGroup which if true
when imgRequestProxy::RemoveFromLoadGroup is called, will dispatch the
removal from the load group instead of executing it inline. This ensures
safety for any callers (e.g. to CancelAndForgetObserver) as above.
imgRequestProxy::SetLoadGroup did not have a predictable effect and
it appears to be unused. It is somewhat complicated to support given
we must be sensitive about what context we execute removing the
request from the original load group.
imgLoader::LoadImage now asserts in debug builds that the load group
given as a parameter matches that of the given document (if any). If
they mismatch, then we won't be blocking the document's load event as we
expect with the future removal of the imgIOnloadBlocker.
imgLoader::LoadImageWithChannel never actually added the request to the
load group at all, unless it was done as part of the validator. Now it
will consistently add the request to the channel's load group as
expected. Additionally it also asserts in debug builds that the
channel's load group matches that of the given document, as in
LoadImage.
It's easy to mess up the scoping so that (a) the label is pushed and then
immediately popped, and/or (b) the string doesn't live long enough. It's also
easy to do a utf16-to-utf8 conversion unnecessarily when the profiler is
inactive. This patch splits that macro into three new ones that are harder to
mess up.
- AUTO_PROFILER_LABEL_DYNAMIC_CSTR: same as current.
- AUTO_PROFILER_LABEL_DYNAMIC_NSCSTRING: for nsCStrings.
- AUTO_PROFILER_LABEL_DYNAMIC_LOSSY_NSSTRING: for nsStrings.
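For illustration, the three variants are used roughly like so (label and
category arguments assumed):

  AUTO_PROFILER_LABEL_DYNAMIC_CSTR("nsDocShell::LoadURI", OTHER, spec.get());
  AUTO_PROFILER_LABEL_DYNAMIC_NSCSTRING("nsDocShell::LoadURI", OTHER, spec);      // nsCString
  AUTO_PROFILER_LABEL_DYNAMIC_LOSSY_NSSTRING("nsDocShell::LoadURI", OTHER, wide); // nsString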
--HG--
extra : rebase_source : 3e2bbec4737b696e1c86579ae54be4cb3186c100
Most cases where the pointer is stored into an already-declared variable can
trivially be changed to MakeNotNull<T*>, as the NotNull raw pointer will end
up in a smart pointer.
In RAII cases, the target type can be specified (e.g.
`MakeNotNull<RefPtr<imgFrame>>()`), in which case the variable type may just
be `auto`, similar to the common use of MakeUnique. The exception is when
the target type is a base pointer, in which case it must be specified in
the declaration.
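Illustrative uses, mirroring the cases above (types chosen for the example):

  #include "mozilla/NotNull.h"
  #include "mozilla/RefPtr.h"

  // Raw-pointer form: the NotNull<T*> lands in a smart pointer.
  RefPtr<imgFrame> frame = MakeNotNull<imgFrame*>();

  // RAII form: spell the target type, let the variable be `auto`.
  auto frame2 = MakeNotNull<RefPtr<imgFrame>>();

  // Base-pointer case: the declaration must name the base type.
  NotNull<RefPtr<imgIContainer>> image = MakeNotNull<RefPtr<RasterImage>>();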
MozReview-Commit-ID: BYaSsvMhiDi
--HG--
extra : rebase_source : 8fe6f2aeaff5f515b7af2276c439004fa3a1f3ab
We're currently fairly vague and inconsistent about the values we provide to
content policy implementations for requestOrigin and requestPrincipal. In some
cases they're the triggering principal, sometimes the loading principal,
sometimes the channel principal.
Our existing content policy implementations which require or expect a loading
principal currently retrieve it from the context node. Since no current
callers require the principal to be the loading principal, and some already
expect it to be the triggering principal (which there's currently no other way
to retrieve), I chose to pass the triggering principal whenever possible, but
use the loading principal to determine the origin URL.
As a follow-up, I'd like to change the nsIContentPolicy interface to
explicitly receive loading and triggering principals, or possibly just
LoadInfo instances, rather than poorly-defined request
origin/principal/context args. But since that may cause trouble for
comm-central, I'd rather not do it as part of this bug.
MozReview-Commit-ID: LqD9GxdzMte
--HG--
extra : rebase_source : 41ce439912ae7b895e0a3b0e660fa6ba571eb50f
The imgLoader code consistently uses the term 'loadingPrincipal' for the
principal that is called the triggeringPrincipal everywhere else it's used.
This is confusing, and since we need to make changes to how those values are
determined, it should be fixed beforehand.
MozReview-Commit-ID: 8CTHwayzcaD
--HG--
extra : rebase_source : d4405b0ecfe1c8dfb9bfdf61fe6ed6cfb180ba83
Currently the Gecko Profiler defines a moderate amount of stuff when
MOZ_GECKO_PROFILER is undefined. It also #includes various headers, including
JS ones. This is making it difficult to separate Gecko's media stack for
inclusion in Servo.
This patch greatly simplifies how things are exposed. The starting point is:
- GeckoProfiler.h can be #included unconditionally;
- everything else from the profiler must be guarded by MOZ_GECKO_PROFILER.
In practice this introduces way too many #ifdefs, so the patch loosens it by
adding no-op macros for a number of the most common operations.
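A simplified sketch of the pattern (names and expansions illustrative, not
the actual definitions):

  #ifdef MOZ_GECKO_PROFILER
  #  define AUTO_PROFILER_LABEL(label, category) \
       ::mozilla::AutoProfilerLabel raiiObject(label, nullptr)
  #else
  #  define AUTO_PROFILER_LABEL(label, category)  /* expands to nothing */
  #endif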
The net result is that #ifdefs and macros are used a bit more, but almost
nothing is exposed in non-MOZ_GECKO_PROFILER builds (including
ProfilerMarkerPayload.h and GeckoProfiler.h), and understanding what is exposed
is much simpler than before.
Note also that in BHR, ThreadStackHelper is now entirely absent in
non-MOZ_GECKO_PROFILER builds.
This patch:
- adds fails-if annotations for all the reftests that were consistently failing
with layers-free turned on.
- removes fails-if or reduces the range on fuzzy-if annotations for all
the reftests that were producing UNEXPECTED-PASS results with
layers-free turned on.
- adds skip-if, random-if, or fuzzy-if annotations to the reftests that
were intermittently failing due to timeout, obvious incorrectness, or
slight pixel differences, respectively.
MozReview-Commit-ID: A0Aknn6rnjj
--HG--
extra : rebase_source : 420d9cf43f23a5d654fa36eec69138937d13c173
Currently we only permit requests from HTTP channels to be retargeted to
the image IO thread. It was implemented this way originally in bug
867755 but it does not appear there was a specific reason for that.
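For reference, retargeting a request looks roughly like this (the interface
is necko's; the exact call site is assumed):

  nsCOMPtr<nsIThreadRetargetableRequest> retargetable =
    do_QueryInterface(aRequest);
  if (retargetable) {
    // Deliver OnDataAvailable to the image IO thread instead of the main thread.
    retargetable->RetargetDeliveryTo(imageIOThread);
  }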
The only kink in this is that some browser chrome mochitests listen for
debug-build-only events to ensure certain chrome images are loaded and/or
drawn. As such, this patch ensures that those observer notifications
continue to be served, requiring a dispatch from the image IO thread to
the main thread.
Another issue to note is that SVGs must be processed on the main thread;
the underlying SVG document can only be accessed from it. We enforce
this by checking the content type. The possibility already exists that
an HTTP response could contain the wrong content type, and in that case,
we fail to decode the image, as there is no content sniffing support for
SVG. Thus there should be no additional risk taken by using the image IO
thread from other non-HTTP channels (if they don't specify the SVG
content type, it is not rendered today, and if they do, it will remain
on the main thread as it is today).
We also ignore data URIs. The specification requires that we process
these images synchronously. See bug 1325080 for details.
There are a number of operations with the surface cache which may result
in individual surfaces for a particular image cache to be removed. If an
image cache is emptied, and we are in factor of 2 mode, we should reset
it to the default mode, because we require at least one surface to be
available to determine the native/ideal size. Additionally, if the cache
is not locked, it should be removed entirely from the surface cache. We
handle this correctly in methods such as Lookup and LookupBestMatch, but
Prune and CollectSizeOfSurfaces can also cause this to happen, as
recently done in bug 1370412 and bug 1380649.
In order to let necko postpone the load of a favicon, we have to set a request context ID on the HTTP channel that is created to load the favicon.
This patch starts by passing a request context ID to nsContentUtils::LoadImage and makes other necessary changes to set the request context ID on the channel.
When we look up a surface in the cache, we are careful to remove any
surfaces which were backed by volatile memory and got purged before we
could reacquire the buffer. We were not so careful in doing that when
generating memory reports. ISurfaceProvider::AddSizeOfExcludingThis will
cause us to acquire the buffer, and if it was purged, forget about its
purged status. Later when we performed a lookup, we would forget the
purged status, and assume we have the right data. This would appear as
completely transparent for BGRA surfaces, and completely black for BGRX
surfaces.
With this patch, we now properly remove purged surfaces instead of
including them in the report. This ensures that the cache state is
consistent. This also resolves memory reports of surfaces which reported
using no data -- they were purged when the report was generated.
Additionally, there was a bug in SurfaceCache::PruneImage where we did
not discard surfaces outside the module lock. Both PruneImage and
CollectSizeOfSurfaces now free any discarded surfaces outside the lock.
When the surface cache starts tracking an unlocked surface, it must
insert it into the expiration tracker, so that it can be freed later if
it remains unused. ExpirationTrackerImpl::AddObjectLocked can fail
due to out-of-memory conditions or during shutdown, which we previously
ignored, and could leave us in a state where we think the surface is in
the tracker but is not. When we later try to mark the surface as used in
the tracker, it will hit a release assert because it doesn't exist. Now
we handle the insertion failure by discarding the surface. Marking the
surface as used can itself encounter a similar issue, and we handle it
the same way.
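A sketch of the new failure handling (names assumed; the real logic lives
in SurfaceCache):

  nsresult rv = mExpirationTracker.AddObjectLocked(aSurface, aAutoLock);
  if (NS_FAILED(rv)) {
    // The surface never made it into the tracker; discard it rather than
    // leaving the bookkeeping inconsistent.
    Remove(aSurface, aAutoLock);
    return;
  }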
MozReview-Commit-ID: Kv6l0znnG48
An ImageSurfaceCache cannot enter factor-of-2 mode without a minimum
number of surfaces being present in its cache. However those surfaces
can be purged from the cache through various means (expiring due to
disuse, volatile buffers being purged, etc.). Also, it is entirely possible
that all the surfaces get purged, but the cache itself remains. Since
factor-of-2 mode requires at least one surface (to get the owning image
and its native size), we need to handle the case when the cache is
emptied appropriately. As such, we now reset the factor-of-2 mode (and
its pruned state) to the default (false) if we transition from non-empty
to empty.
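An illustrative shape of the reset (member names assumed):

  // On transition from non-empty to empty, return to the default mode.
  if (mSurfaces.Count() == 0) {
    mFactor2Mode = false;
    mFactor2Pruned = false;
  }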
MozReview-Commit-ID: EVaEqW59Asv
When SurfaceCache::Lookup is called to access surface data, it indicates
that the caller will not accept substitutes, unlike in the case of
SurfaceCache::LookupBestMatch. As such, we need to be careful not to
remove those surfaces from our cache when pruning (in part 8b). This is
the marker used to track that, at some point, there was a caller which
got this surface and would accept no other (e.g. factor-of-2 mode must
make an exception for this particular surface).
In bug 1383499 we fixed the case where on Android, animated images could
consume all of the available file handles. This is because each volatile
buffer will contain a file handle on Android, and animated images can
contain many, many frames. However in doing so we introduced a bug where
the stride of the replacement surface was aligned to a 16-byte boundary. We
do not currently support any stride value other than pixel size * image width
in the image decoding framework. This may be something we correct in the
future, but for now we should just ensure all surfaces follow the
expected stride value.
This is straightforward, with only two notable things.
- `#include "nsXPIDLString.h"` is replaced with `#include "nsString.h"`
throughout, because all nsXPIDLString.h did was include nsString.h. The
exception is for files which already include nsString.h, in which case the
patch just removes the nsXPIDLString.h inclusion.
- The patch removes the |xpidl_string| gtest, but improves the |voided| test to
cover some of its ground, e.g. testing Adopt(nullptr).
--HG--
extra : rebase_source : 452cc4a08046a1adb1a8099a7e85a1917de5add8
We should not write forward declarations for the nsString classes directly;
instead we should use nsStringFwd.h. This will make changing the underlying
types easier.
--HG--
extra : rebase_source : b2c7554e8632f078167ff2f609392e63a136c299
These are all simple cases, with similarities to previous patches in this
series.
--HG--
extra : rebase_source : 6ef36382df9fef217d5cb737e218d65ac062f90a
These are all easy cases where an nsXPIDLCString local variable is set via
getter_Copies() and then is used in ways that rely on the implicit conversion
to |char*|. The patch uses get() and EqualsLiteral() calls to replace the
implicit conversions.
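An illustrative before/after for one such local (pref name and call shapes
assumed):

  nsCString value;  // was: nsXPIDLCString value;
  prefs->GetCharPref("browser.startup.homepage", getter_Copies(value));

  printf("%s\n", value.get());              // was: implicit |const char*| conversion
  if (value.EqualsLiteral("about:home")) {  // was: strcmp()-style comparison
    // ...
  }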
Animated image frames are never actually released to let the OS purge
them, because AnimationSurfaceProvider keeps them as RawAccessFrameRef.
Meanwhile, each volatile buffer on Android needs a file handle to be
retained for as long as the buffer lives. Given sufficient number of
animated image frames, we could easily exhaust the available file
handles (which in turn causes many problems). As such on Android we
should stick with the heap for animated image frames. On other platforms
it is better to stay because we will avoid a memset, because the OS will
zero-fill the requested pages on our behalf.
StreamingLexer::Clone should always succeed because we are merely
creating a new SourceBufferIterator which is at the same position as the
given iterator. However, if there is no more data after the current
position, it could return COMPLETE instead of READY.
This should not happen during the first Advance loop, however. We handle
the failure gracefully now, and if someone files a report with the
invalid ICO file causing this problem, then we can investigate further.
This patch does the following.
- Moves nsWindowSizes from nsWindowMemoryReporter.h to its own file,
nsWindowSizes.h, so it can be included more widely without exposing
nsWindowMemoryReporter.
- Merges nsArenaMemoryStats.h (which defines nsTabSizes and nsArenaMemoryStats)
into nsWindowSizes.h.
- Renames nsArenaMemoryStats as nsArenaSizes, and nsWindowSizes::mArenaStats as
nsWindowSizes::mArenaSizes. This is the more usual naming scheme for such
types.
- Renames FRAME_ID_STAT_FIELD as NS_ARENA_SIZES_FIELD.
- Passes nsWindowSizes to PresShell::AddSizeOfIncludingThis() and
nsPresArena::AddSizeOfExcludingThis(), instead of a bunch of smaller things.
One nice consequence is that the odd nsArenaMemoryStats::mOther field is no
longer necessary, because we can update nsWindowSizes::mLayoutPresShellSize
directly in nsPresArena::AddSizeOfExcludingThis().
- Adds |const| to a few methods.
MozReview-Commit-ID: EpgFWKFqy7Y
When an animated image has been discarded, we avoided marking the
composited frame invalid unless it had been previously decoded. Most of
the time this was fine, but if the animated image was still decoding for
the first time, then we still had a composited frame lingering that we
did not mark as invalid. As a result, when we called
RasterImage::LookupFrame (and indirectly
FrameAnimator::GetCompositedFrame), it would always return the
composited frame. This meant that RasterImage::Decode would never be
called to trigger a redecode. At the same time,
FrameAnimator::RequestRefresh would not cause us to advance the frame
because the state was still discarded.
With this patch we separate out the concepts of "has ever requested to
be decoded" and "has ever completed decoding." The former is now used to
control whether or not a composited frame is marked as invalid after we
discover we currently have no surface for the animation -- this solves
the animation remaining frozen as we now request the redecode as
expected. The latter remains used to determine if we actually know the
total number of frames.
- Use displayPrePath in the pageInfo permissions section that shows "Permissions for:"
- The extra displayPrePath attribute is necessary because it is difficult to compute manually. By contrast, no displaySpecWithoutRef is added, since it is easy to get that by truncating displaySpec at the first '#' symbol.
MozReview-Commit-ID: 9RM5kQ2OqfC
A default constructed SurfacePipe contains a NullSurfaceSink as its
filter in mHead. This filter does nothing and is merely a placeholder.
Since most SurfacePipe objects are constructed with the default
constructor, and NullSurfaceSink has no (modified) state, we use a
singleton to represent it. Normally the SurfacePipe owns its filter, so
it needs to do a special check for NullSurfaceSink to ensure it doesn't
free it explicitly.
A Decoder object contains a default constructed SurfacePipe until it
needs to create the first frame from an image. This is a very brief
window because it does not take very long or much data to get to this
stage of decoding.
The NullSurfaceSink singleton is freed upon shutdown; however, some
ISurfaceProvider objects may be lingering after this. If their Decoder
has yet to create the first frame, that means the SurfacePipe actually
contains a dangling pointer to the already freed singleton. To make
things worse, the SurfacePipe would then try to free that filter, because
the dangling pointer no longer matched the singleton (a double free!).
As such, this change removes NullSurfaceSink entirely. We never use the
SurfacePipe before initializing it with a proper filter, and it would be
considered a programming error to do so. Instead let SurfacePipe::mHead
be null, and assert that it is not null when any operations are
performed on the SurfacePipe.
This mechanically replaces nsILocalFile with nsIFile in
*.js, *.jsm, *.sjs, *.html, *.xul, *.xml, and *.py.
MozReview-Commit-ID: 4ecl3RZhOwC
--HG--
extra : rebase_source : 412880ea27766118c38498d021331a3df6bccc70
nsIURI.originCharset had two use cases:
1) Dealing with the spec-incompliant feature of escapes in the hash
(reference) part of the URL.
2) For UI display of non-UTF-8 URLs.
For hash part handling, we use the document charset instead. For pretty
display of query strings on legacy-encoded pages, we no longer cater to them
(see bug 817374 comment 18).
Also, the URL Standard has no concept of an "origin charset". This patch
removes nsIURI.originCharset to reduce complexity and improve spec compliance.
MozReview-Commit-ID: 3tHd0VCWSqF
--HG--
extra : rebase_source : b2caa01f75e5dd26078a7679fd7caa319a65af14
We should use AutoTArray instead because telemetry shows that the vast
majority of images loaded (95%+) contain only a single chunk when
decoding finishes. Even if part 2 of this bug increases the number of
images loaded with multiple chunks, we still call SourceBuffer::Compact
after the decode is complete, which typically will reduce the number of
chunks to 1 (unless memory is very low and we fail to consolidate the
chunks). Thus it should be rare to contain more than 1 chunk on
anything but a temporary basis, and we can easily save the malloc
overhead.
Note that SourceBuffer::AppendChunk still uses the fallible variant of
nsTArray::AppendElement.
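A sketch of the member change and the fallible append (Chunk is
SourceBuffer's element type; exact declarations assumed):

  AutoTArray<Chunk, 1> mChunks;  // inline capacity for the 95%+ single-chunk case

  // In SourceBuffer::AppendChunk:
  if (!mChunks.AppendElement(std::move(aChunk), fallible)) {
    return NS_ERROR_OUT_OF_MEMORY;
  }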
Currently SourceBuffer::ExpectLength will allocate a buffer which is a
multiple of MIN_CHUNK_CAPACITY (4096) bytes, no matter what the expected
size is. While it is true that HTTP servers can lie, and that we need to
handle that for legacy purposes, it is more likely the HTTP servers are
telling the truth when it comes to the content length. Additionally
images sourced from other locations, such as the file system or data
URIs, are always going to have the correct size information (barring a
bug elsewhere in the file system or our code). We should be able to
trust the size given as a good first guess.
While overallocating in general is a waste of memory,
SourceBuffer::Compact causes a far worse problem. After we have written
all of the data, and there are no active readers, we attempt to shrink
the allocated buffer(s) into a single contiguous chunk of the exact
length that we need (e.g. N allocations to 1, or 1 oversized allocation
to 1 perfect). Since we almost always overallocate, that means we almost
always trigger the logic in SourceBuffer::Compact to reallocate the data
into a properly sized buffer. If we had simply trusted the expected size
in the first place, we could have avoided this situation for the
majority of images.
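A sketch of the first-guess capacity logic (names assumed):

  size_t capacity = aExpectedLength > 0
                      ? aExpectedLength      // trust the reported content length
                      : MIN_CHUNK_CAPACITY;  // fall back to the 4096-byte minimum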
In the case that we really do get the wrong size, then we will allocate
additional chunks which are multiples of MIN_CHUNK_CAPACITY bytes to fit
the data. At most, this will increase the number of discrete allocations
by 1, and trigger SourceBuffer::Compact to consolidate at the end. Since
we are almost always doing that before, and now we rarely do, this is a
significant win.
SourceBuffer::Compact attempts to consolidate multiple, discrete
allocations into a single buffer, as well as trim excess capacity from a
single allocation if we set aside too much. Using realloc lets
jemalloc (or whatever heap implementation we have) decide which is
better -- growing the existing buffer if there is sufficient free memory
contiguous with the first chunk, or allocating a new buffer entirely.
Since we were going to copy regardless, this should result either in an
improvement or the status quo. Brief empirical testing on Linux suggests
somewhere from 1/3 to 1/2 of allocations resulted in reusing the same
data pointer (and presumably avoided a copy as a result). This also has
the advantage of potentially reducing OOM errors, as it may have enough
room to satisfy an expansion, but not an entirely new buffer.
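The core idea, greatly simplified (names assumed; the real code also folds
multiple chunks together and bails out if readers are active):

  #include <cstdlib>

  char* newData = static_cast<char*>(realloc(oldData, finalLength));
  if (newData) {
    // jemalloc either resized in place (no copy) or moved the allocation.
    oldData = newData;
  }  // on failure, the old buffer is still valid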
All the SizeOf{In,Ex}cludingThis() functions take a MallocSizeOf function
which measures memory blocks. This patch introduces a new type, SizeOfState,
which includes a MallocSizeOf function *and* a table of already-measured
pointers, called SeenPtrs. This gives us a general mechanism to measure
graph-like data structures, by recording which nodes have already been
measured. (This approach is used in a number of existing reporters, but not in
a uniform fashion.)
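A sketch of the new type (field and method shapes assumed):

  struct SizeOfState {
    explicit SizeOfState(mozilla::MallocSizeOf aMallocSizeOf)
      : mMallocSizeOf(aMallocSizeOf) {}

    // True if we have already measured this pointer; otherwise records it.
    bool HaveSeenPtr(const void* aPtr) { return !mSeenPtrs.EnsureInserted(aPtr); }

    mozilla::MallocSizeOf mMallocSizeOf;
    nsTHashtable<nsPtrHashKey<const void>> mSeenPtrs;
  };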
The patch also converts the window memory reporting to use SizeOfState in a lot
of places, all the way through to the measurement of Elements. This is a
precursor for bug 1383977 which will measure Stylo elements, which involve
Arcs.
The patch also converts the existing mAlreadyMeasuredOrphanTrees table in the
OrphanReporter to use the new mechanism.
--HG--
extra : rebase_source : 2c23285f8b6c3b667560a9d14014efc4633aed51
This is similar to the previous patch, but for the 8-bit string variants.
Also, it changes assignment to Adopt() in GetCString() and GetDefaultCString()
to avoid an extra copy.
--HG--
extra : rebase_source : eba805c3a7b809d5ccd6e853b1c9010db9477667
The ICO decoder creates a cloned SourceBufferIterator for its own
SourceBuffer bounded by the resource size. This iterator is used by the
child decoder (PNG, BMP) for decoding the actual image. However we rely
upon the ICO decoder and its iterator to drive event loop, rather than
the child decoder and the cloned iterator. The cloned iterator knows how
many bytes it requires, but it is problematic to give it a consumer to
tell us when to resume without changes to StreamingLexer.
Without a consumer (IResumable), we won't have anything to notify when
we get the appropriate amount of data for the caller. If the caller
tries to advance after some unknown amount of data has been written to
the SourceBuffer, then it may need to go back to waiting. Thus it should
only assert for a spurious wakeup if we have an actual consumer.
Thus far gtests have only tested fairly simple images which already
render the same on all platforms (e.g. solid green 100x100 square).
If we want to test more complicated images consistently across
platforms, we need to ensure the color adjustments we perform are
also consistent. Using the pref gfx.color_management.force_srgb to
force an sRGB CMS profile makes us consistent with the reftests and
mochitests.
However an additional quirk of the gtests is that we own the main
thread and we never check our event queue to see if anything is
pending. Depending on the initialization order of our graphics
dependencies, it may or may not have created pending runnables to
process the pref change. As such, we need to change the pref,
initialize imagelib/gfx and then check for, and if present execute,
any necessary runnables. Only then can we be sure that our desired
CMS profile is applied.
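A minimal sketch of that setup (API shapes assumed):

  #include "mozilla/Preferences.h"
  #include "nsThreadUtils.h"

  mozilla::Preferences::SetBool("gfx.color_management.force_srgb", true);
  // ... initialize imagelib/gfx here ...
  NS_ProcessPendingEvents(nullptr);  // run any pending pref-change runnables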
imgRequestProxy::IsOnEventTarget must return false in order for imgRequestProxy::Dispatch to be called. Typically we check for mListener before any of this but in imgRequest::OnLoadComplete, we have other things to do besides notifying the listener. As such, we want to dispatch even if there is no listener, and that is when the assert can fail. Since IsOnEventTarget can only return false if it has either a tab group *or* a listener, we can change the assert to match.
imgRequestProxy::SyncClone preserves the original behaviour of issuing
synchronous notifications once cloned. Some uses and tests depend on
this behaviour but in an ideal world, it would not be required.
imgRequestProxy::Clone is intended to be the replacement going forward,
which issues asynchronous notifications once cloned.