This patch does the following:
- Moves nsWindowSizes from nsWindowMemoryReporter.h to its own file,
nsWindowSizes.h, so it can be included more widely without exposing
nsWindowMemoryReporter.
- Merges nsArenaMemoryStats.h (which defines nsTabSizes and nsArenaMemoryStats)
into nsWindowSizes.h.
- Renames nsArenaMemoryStats as nsArenaSizes, and nsWindowSizes::mArenaStats as
nsWindowSizes::mArenaSizes. This is the more usual naming scheme for such
types.
- Renames FRAME_ID_STAT_FIELD as NS_ARENA_SIZES_FIELD.
- Passes nsWindowSizes to PresShell::AddSizeOfIncludingThis() and
nsPresArena::AddSizeOfExcludingThis(), instead of a bunch of smaller things.
One nice consequence is that the odd nsArenaMemoryStats::mOther field is no
longer necessary, because we can update nsWindowSizes::mLayoutPresShellSize
directly in nsPresArena::AddSizeOfExcludingThis().
- Adds |const| to a few methods.
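A rough sketch of the resulting shape, as standalone C++ with illustrative members (the real headers contain many more fields, and NS_ARENA_SIZES_FIELD expands to per-frame-class entries):

    // Sketch only; models the merged nsWindowSizes.h described above.
    #include <cstddef>

    typedef size_t (*MallocSizeOf)(const void* aPtr);

    struct nsArenaSizes {      // formerly nsArenaMemoryStats; no mOther field
      size_t mLineBoxes = 0;   // illustrative members only
      size_t mRuleNodes = 0;
    };

    struct nsWindowSizes {
      explicit nsWindowSizes(MallocSizeOf aMallocSizeOf)
        : mMallocSizeOf(aMallocSizeOf) {}

      nsArenaSizes mArenaSizes;         // formerly mArenaStats
      size_t mLayoutPresShellSize = 0;  // updated directly by
                                        // nsPresArena::AddSizeOfExcludingThis()
      MallocSizeOf mMallocSizeOf;
    };

    // The new signatures take the whole object instead of smaller pieces:
    //   void PresShell::AddSizeOfIncludingThis(nsWindowSizes& aSizes) const;
    //   void nsPresArena::AddSizeOfExcludingThis(nsWindowSizes& aSizes);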
MozReview-Commit-ID: EpgFWKFqy7Y
When an animated image was discarded, we avoided marking the
composited frame invalid unless it had been previously decoded. Most of
the time this was fine, but if the animated image was still decoding for
the first time, then we still had a composited frame lingering that we
did not mark as invalid. As a result, when we called
RasterImage::LookupFrame (and indirectly
FrameAnimator::GetCompositedFrame), it would always return the
composited frame. This meant that RasterImage::Decode would never be
called to trigger a redecode. At the same time,
FrameAnimator::RequestRefresh would not cause us to advance the frame
because the state was still discarded.
With this patch we separate out the concepts of "has ever requested to
be decoded" and "has ever completed decoding." The former is now used to
control whether or not a composited frame is marked as invalid after we
discover we currently have no surface for the animation -- this fixes
the frozen animation, since we now request the redecode as
expected. The latter remains used to determine if we actually know the
total number of frames.
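A simplified standalone model of the two separated concepts (field and method names are hypothetical; the real state lives in FrameAnimator):

    struct AnimationState {
      bool mHasRequestedDecode = false;  // "has ever requested to be decoded"
      bool mHasCompletedDecode = false;  // "has ever completed decoding"

      // Called when we discover we have no surface for the animation.
      // Keying this off mHasCompletedDecode (the old behavior) left a
      // still-decoding, discarded image with a stale composited frame.
      bool ShouldInvalidateCompositedFrame() const {
        return mHasRequestedDecode;
      }

      // Only a completed decode tells us the total number of frames.
      bool KnowsTotalFrameCount() const {
        return mHasCompletedDecode;
      }
    };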
- Use displayPrePath in the pageInfo permissions section that shows
  "Permissions for:".
- The extra displayPrePath attribute is necessary because it is
  difficult to compute manually. By contrast, a displaySpecWithoutRef
  attribute is unnecessary, since it is easy to obtain by truncating
  displaySpec at the first '#' symbol (see the sketch below).
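For contrast, a standalone sketch (std::string standing in for the URI string types) of why truncation makes displaySpecWithoutRef unnecessary:

    #include <string>

    // Everything up to the first '#', or the whole spec if there is none.
    std::string SpecWithoutRef(const std::string& aDisplaySpec) {
      return aDisplaySpec.substr(0, aDisplaySpec.find('#'));
    }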
MozReview-Commit-ID: 9RM5kQ2OqfC
A default constructed SurfacePipe contains a NullSurfaceSink as its
filter in mHead. This filter does nothing and is merely a placeholder.
Since most SurfacePipe objects are constructed with the default
constructor, and NullSurfaceSink has no state that is ever modified, we use a
singleton to represent it. Normally the SurfacePipe owns its filter, so
it needs to do a special check for NullSurfaceSink to ensure it doesn't
free it explicitly.
A Decoder object contains a default constructed SurfacePipe until it
needs to create the first frame from an image. This is a very brief
window because it does not take very long or much data to get to this
stage of decoding.
The NullSurfaceSink singleton is freed upon shutdown; however, some
ISurfaceProvider objects may still be lingering after this. If their
Decoder has yet to create the first frame, the SurfacePipe actually
contains a dangling pointer to the already freed singleton. To make
things worse, the SurfacePipe would then try to free the filter itself,
because the dangling pointer no longer matched the (already freed)
singleton.
As such, this change removes NullSurfaceSink entirely. We never use the
SurfacePipe before initializing it with a proper filter, and it would be
considered a programming error to do so. Instead let SurfacePipe::mHead
be null, and assert that it is not null when any operations are
performed on the SurfacePipe.
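A standalone sketch of the new invariant (types simplified; the real SurfacePipe owns a chain of SurfaceFilter objects):

    #include <cassert>
    #include <memory>

    struct SurfaceFilter { /* ... */ };

    class SurfacePipe {
    public:
      SurfacePipe() = default;  // mHead is now simply null

      void SetHead(std::unique_ptr<SurfaceFilter> aFilter) {
        mHead = std::move(aFilter);
      }

      void WritePixels(/* ... */) {
        // Using an uninitialized SurfacePipe is a programming error.
        assert(mHead && "SurfacePipe used before initialization");
        // ... forward the call to mHead ...
      }

    private:
      std::unique_ptr<SurfaceFilter> mHead;  // no NullSurfaceSink placeholder
    };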
This mechanically replaces nsILocalFile with nsIFile in
*.js, *.jsm, *.sjs, *.html, *.xul, *.xml, and *.py.
MozReview-Commit-ID: 4ecl3RZhOwC
nsIURI.originCharset had two use cases:
1) Dealing with the spec-incompliant feature of escapes in the hash
(reference) part of the URL.
2) UI display of non-UTF-8 URLs.
For hash part handling, we use the document charset instead. For pretty
display of query strings on legacy-encoded pages, we no longer cater to
them (see bug 817374 comment 18).
Also, the URL Standard has no concept of an "origin charset". This patch
removes nsIURI.originCharset to reduce complexity and improve spec
compliance.
MozReview-Commit-ID: 3tHd0VCWSqF
We should use AutoTArray instead because telemetry shows that the vast
majority of images loaded (95%+) contain only a single chunk when
decoding finishes. Even if part 2 of this bug increases the number of
images loaded with multiple chunks, we still call SourceBuffer::Compact
after the decode is complete, which typically will reduce the number of
chunks to 1 (unless memory is very low and we fail to consolidate the
chunks). Thus it should be rare to contain more than 1 chunk on
anything but a temporary basis, and we can easily save the malloc
overhead.
Note that SourceBuffer::AppendChunk still uses the fallible variant of
nsTArray::AppendElement.
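The member change amounts to something like this in-tree fragment (not a standalone program; names follow the prose above):

    // Inline capacity of 1 covers the 95%+ single-chunk case without a
    // separate heap allocation for the array storage.
    AutoTArray<Chunk, 1> mChunks;  // was: nsTArray<Chunk>

    // Appends stay fallible, as noted above:
    if (!mChunks.AppendElement(std::move(aChunk), fallible)) {
      return NS_ERROR_OUT_OF_MEMORY;
    }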
Currently SourceBuffer::ExpectLength will allocate a buffer which is a
multiple of MIN_CHUNK_CAPACITY (4096) bytes, no matter what the expected
size is. While it is true that HTTP servers can lie, and that we need to
handle that for legacy purposes, it is more likely the HTTP servers are
telling the truth when it comes to the content length. Additionally,
images sourced from other locations, such as the file system or data
URIs, are always going to have the correct size information (barring a
bug elsewhere in the file system or our code). We should be able to
trust the size given as a good first guess.
While overallocating in general is a waste of memory,
SourceBuffer::Compact causes a far worse problem. After we have written
all of the data, and there are no active readers, we attempt to shrink
the allocated buffer(s) into a single contiguous chunk of the exact
length that we need (e.g. consolidating N allocations into 1, or
trimming 1 oversized allocation to a perfect fit). Since we almost
always overallocate, that means we almost
always trigger the logic in SourceBuffer::Compact to reallocate the data
into a properly sized buffer. If we had simply trusted the expected size
in the first place, we could have avoided this situation for the
majority of images.
In the case that we really do get the wrong size, then we will allocate
additional chunks which are multiples of MIN_CHUNK_CAPACITY bytes to fit
the data. At most, this will increase the number of discrete allocations
by 1, and trigger SourceBuffer::Compact to consolidate at the end. Since
we almost always did that before, and now we rarely do, this is a
significant win.
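A standalone sketch of the capacity policy described above (the real logic lives in SourceBuffer::ExpectLength):

    #include <cstddef>

    static const size_t MIN_CHUNK_CAPACITY = 4096;

    // First guess: trust the expected length when we have one; fall back
    // to the chunk granularity only when we know nothing. If the length
    // turns out to be a lie, later chunks are multiples of
    // MIN_CHUNK_CAPACITY and Compact consolidates at the end.
    size_t FirstChunkCapacity(size_t aExpectedLength) {
      return aExpectedLength > 0 ? aExpectedLength : MIN_CHUNK_CAPACITY;
    }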
SourceBuffer::Compact attempts to consolidate multiple, discrete
allocations into a single buffer, as well as trim excess capacity from a
single allocation if we set aside too much. Using realloc lets
jemalloc (or whatever heap implementation we have) decide which is
better -- growing the existing buffer if there is sufficient free memory
contiguous with the first chunk, or allocating a new buffer entirely.
Since we were going to copy regardless, this should result either in an
improvement or the status quo. Brief empirical testing on Linux suggests
somewhere from 1/3 to 1/2 of allocations resulted in reusing the same
data pointer (and presumably avoided a copy as a result). This also has
the advantage of potentially reducing OOM errors, as the heap may have
enough room to satisfy an expansion in place, but not an entirely new
buffer.
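A standalone sketch of the realloc-based path (the real code operates on SourceBuffer chunks; assumes aUsedLength > 0):

    #include <cstddef>
    #include <cstdlib>

    // Shrink (or move) the buffer to exactly the bytes we used. realloc
    // lets the heap resize in place when the surrounding memory allows
    // it, instead of a guaranteed malloc + memcpy + free.
    char* CompactChunk(char* aData, size_t aUsedLength) {
      char* resized = static_cast<char*>(realloc(aData, aUsedLength));
      return resized ? resized : aData;  // keep the original on failure
    }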
All the SizeOf{In,Ex}cludingThis() functions take a MallocSizeOf function
which measures memory blocks. This patch introduces a new type, SizeOfState,
which includes a MallocSizeOf function *and* a table of already-measured
pointers, called SeenPtrs. This gives us a general mechanism to measure
graph-like data structures, by recording which nodes have already been
measured. (This approach is used in a number of existing reporters, but not in
a uniform fashion.)
The patch also converts the window memory reporting to use SizeOfState in a lot
of places, all the way through to the measurement of Elements. This is a
precursor for bug 1383977, which will measure Stylo elements, which involve
Arcs.
The patch also converts the existing mAlreadyMeasuredOrphanTrees table in the
OrphanReporter to use the new mechanism.
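A standalone model of the new type, with std::unordered_set standing in for the SeenPtrs table:

    #include <cstddef>
    #include <unordered_set>

    typedef size_t (*MallocSizeOf)(const void* aPtr);

    struct SizeOfState {
      explicit SizeOfState(MallocSizeOf aMallocSizeOf)
        : mMallocSizeOf(aMallocSizeOf) {}

      // Returns true if aPtr was already measured; records it otherwise.
      // Callers skip nodes they have seen, so shared nodes in a
      // graph-like structure are counted exactly once.
      bool HaveSeenPtr(const void* aPtr) {
        return !mSeenPtrs.insert(aPtr).second;
      }

      MallocSizeOf mMallocSizeOf;
      std::unordered_set<const void*> mSeenPtrs;
    };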
This is similar to the previous patch, but for the 8-bit string variants.
Also, it changes assignment to Adopt() in GetCString() and GetDefaultCString()
to avoid an extra copy.
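The Adopt() change, shown as an in-tree fragment (Gecko string API; variable names illustrative):

    // aResult is an nsACString out-param and buf is a heap-allocated
    // char* handed to us by the underlying getter.
    // Before: aResult = buf;  // copies the characters; buf then freed
    aResult.Adopt(buf);        // takes ownership of buf; no extra copy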
The ICO decoder creates a cloned SourceBufferIterator for its own
SourceBuffer bounded by the resource size. This iterator is used by the
child decoder (PNG, BMP) for decoding the actual image. However we rely
upon the ICO decoder and its iterator to drive the event loop, rather than
the child decoder and the cloned iterator. The cloned iterator knows how
many bytes it requires, but it is problematic to give it a consumer to
tell us when to resume without changes to StreamingLexer.
Without a consumer (IResumable), we won't have anything to notify when
we get the appropriate amount of data for the caller. If the caller
tries to advance after some unknown amount of data has been written to
the SourceBuffer, then it may need to go back to waiting. Thus it should
only assert for a spurious wakeup if we have an actual consumer.
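A standalone model of the adjusted assertion (IResumable reduced to a placeholder):

    #include <cassert>

    struct IResumable { /* ... */ };

    // Called when the SourceBuffer wakes the iterator.
    void OnResume(const IResumable* aConsumer, bool aHaveEnoughData) {
      if (!aHaveEnoughData) {
        // With a registered consumer we are only woken once the data we
        // asked for has arrived, so waking early is a bug. Without one,
        // an early wakeup is legitimate: just go back to waiting.
        assert(!aConsumer && "spurious wakeup with a registered consumer");
        return;
      }
      // ... advance the decoder ...
    }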
Thus far gtests have only tested fairly simple images which already
render the same on all platforms (e.g. a solid green 100x100 square).
If we want to test more complicated images consistently across
platforms, we need to ensure the color adjustments we perform are
also consistent. Using the pref gfx.color_management.force_srgb to
force an sRGB CMS profile makes us consistent with the reftests and
mochitests.
However, an additional quirk of the gtests is that we own the main
thread and we never check our event queue to see if anything is
pending. Depending on the initialization order of our graphics
dependencies, it may or may not have created pending runnables to
process the pref change. As such, we need to change the pref,
initialize imagelib/gfx and then check for, and if present execute,
any necessary runnables. Only then can we be sure that our desired
CMS profile is applied.
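The setup boils down to something like this in-tree fragment (real Gecko APIs, but the exact call sites in the gtest harness vary):

    // Force a consistent sRGB CMS profile before gfx initializes.
    mozilla::Preferences::SetBool("gfx.color_management.force_srgb", true);

    // Initialize gfx/imagelib (may enqueue runnables for the pref change).
    gfxPlatform::GetPlatform();

    // We own the main thread in gtests, so drain anything pending before
    // relying on the CMS profile.
    nsCOMPtr<nsIThread> mainThread = NS_GetCurrentThread();
    NS_ProcessPendingEvents(mainThread);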