This is straightforward, with only two notable things.
- `#include "nsXPIDLString.h" is replaced with `#include "nsString.h"`
throughout, because all nsXPIDLString.h did was include nsString.h. The
exception is for files which already include nsString.h, in which case the
patch just removes the nsXPIDLString.h inclusion.
- The patch removes the |xpidl_string| gtest, but improves the |voided| test to
cover some of its ground, e.g. testing Adopt(nullptr).
--HG--
extra : rebase_source : 452cc4a08046a1adb1a8099a7e85a1917de5add8
We should not forward-declare the nsString classes directly; instead we
should use nsStringFwd.h. This will make changing the underlying types
easier.
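For illustration, a minimal sketch of the intended pattern (the hand-written
declarations shown as "before" are hypothetical examples, not lines taken
from the patch):

  // Before: forward-declaring the string classes by hand in a header.
  //   class nsString;
  //   class nsCString;

  // After: include the canonical forward-declaration header instead, so the
  // underlying types can change without touching every consumer.
  #include "nsStringFwd.h"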
--HG--
extra : rebase_source : b2c7554e8632f078167ff2f609392e63a136c299
These are all simple cases, with similarities to previous patches in this
series.
--HG--
extra : rebase_source : 6ef36382df9fef217d5cb737e218d65ac062f90a
These are all easy cases where an nsXPIDLCString local variable is set via
getter_Copies() and then is used in ways that rely on the implicit conversion
to |char*|. The patch uses get() and EqualsLiteral() calls to replace the
implicit conversions.
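As an illustration of the pattern (the pref name, variable names, and the
Use() helper are hypothetical, not taken from the patch):

  nsCString value;  // previously an nsXPIDLCString
  nsresult rv = prefs->GetCharPref("example.pref", getter_Copies(value));

  // Before: relied on the implicit conversion to |char*|:
  //   if (!strcmp(value, "expected")) { Use(value); }

  // After: the conversions are spelled out explicitly:
  if (NS_SUCCEEDED(rv) && value.EqualsLiteral("expected")) {
    Use(value.get());
  }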
Animated image frames are never actually released to let the OS purge
them, because AnimationSurfaceProvider keeps them as RawAccessFrameRef.
Meanwhile, each volatile buffer on Android needs a file handle to be
retained for as long as the buffer lives. Given a sufficient number of
animated image frames, we could easily exhaust the available file
handles (which in turn causes many problems). As such, on Android we
should stick with the heap for animated image frames. On other platforms
it is better to keep using volatile buffers, because we avoid a memset:
the OS will zero-fill the requested pages on our behalf.
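A rough sketch of the platform split (the helper function below is
hypothetical; the real decision is made where animated frames allocate
their surfaces):

  static bool ShouldUseHeapForAnimatedFrames()
  {
  #if defined(ANDROID)
    // Each volatile buffer pins a file handle for its lifetime, and animated
    // frames are held as RawAccessFrameRef so they are never purged; enough
    // frames would exhaust the available file handles.
    return true;
  #else
    // Elsewhere volatile buffers win: the OS hands us zero-filled pages, so
    // we avoid an explicit memset.
    return false;
  #endif
  }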
StreamingLexer::Clone should always succeed because we are merely
creating a new SourceBufferIterator which is at the same position as the
given iterator. However, if there is no more data after the current
position, it could return COMPLETE instead of READY.
This should not happen during the first Advance loop, however. We now
handle the failure gracefully, and if someone files a report with an
invalid ICO file that triggers this problem, we can investigate further.
This patch does the following.
- Moves nsWindowSizes from nsWindowMemoryReporter.h to its own file,
nsWindowSizes.h, so it can be included more widely without exposing
nsWindowMemoryReporter.
- Merges nsArenaMemoryStats.h (which defines nsTabSizes and nsArenaMemoryStats)
into nsWindowSizes.h.
- Renames nsArenaMemoryStats as nsArenaSizes, and nsWindowSizes::mArenaStats as
nsWindowSizes::mArenaSizes. This is the more usual naming scheme for such
types.
- Renames FRAME_ID_STAT_FIELD as NS_ARENA_SIZES_FIELD.
- Passes nsWindowSizes to PresShell::AddSizeOfIncludingThis() and
nsPresArena::AddSizeOfExcludingThis(), instead of a bunch of smaller things
(the shape of this change is sketched after this list).
One nice consequence is that the odd nsArenaMemoryStats::mOther field is no
longer necessary, because we can update nsWindowSizes::mLayoutPresShellSize
directly in nsPresArena::AddSizeOfExcludingThis().
- Adds |const| to a few methods.
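A sketch of the shape of that change (the "before" parameter list is
abbreviated, and the body shown is illustrative rather than the actual
implementation):

  // Before: several loose outparams threaded through the call.
  //   void PresShell::AddSizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
  //                                          nsArenaMemoryStats* aArenaStats,
  //                                          size_t* aPresShellSize, ...);

  // After: one aggregate that callees fill in directly.
  void PresShell::AddSizeOfIncludingThis(nsWindowSizes& aSizes) const
  {
    // The arena can update fields such as mLayoutPresShellSize itself,
    // which is why nsArenaMemoryStats::mOther is no longer needed.
    mFrameArena.AddSizeOfExcludingThis(aSizes);
    // ... other per-shell measurements accumulate into aSizes ...
  }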
MozReview-Commit-ID: EpgFWKFqy7Y
When an animated image has been discarded, we avoided marking the
composited frame invalid unless it had been previously decoded. Most of
the time this was fine, but if the animated image was still decoding for
the first time, then we still had a composited frame lingering that we
did not mark as invalid. As a result, when we called
RasterImage::LookupFrame (and indirectly
FrameAnimator::GetCompositedFrame), it would always return the
composited frame. This meant that RasterImage::Decode would never be
called to trigger a redecode. At the same time,
FrameAnimator::RequestRefresh would not cause us to advance the frame
because the state was still discarded.
With this patch we separate out the concepts of "has ever requested to
be decoded" and "has ever completed decoding." The former is now used to
control whether or not a composited frame is marked as invalid after we
discover we currently have no surface for the animation -- this fixes
the problem of the animation remaining frozen, because we now request
the redecode as expected. The latter is still used to determine whether
we actually know the total number of frames.
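A sketch of the two separated concepts (the member names below are
illustrative, not the actual field names in the animation state):

  struct AnimationState
  {
    // "Has ever requested to be decoded": decides whether a lingering
    // composited frame must be marked invalid when no surface is found,
    // so a redecode is actually triggered instead of the animation
    // staying frozen.
    bool mHasRequestedDecode = false;

    // "Has ever completed decoding": only this tells us that the total
    // number of frames is known.
    bool mHasCompletedDecode = false;
  };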
- Use displayPrePath in the pageInfo permissions section that shows "Permissions for:"
- The extra displayPrePath method is necessary because it is difficult to compute manually. By contrast, there is no need for a displaySpecWithoutRef, since it is easy to obtain that by truncating displaySpec at the first '#' symbol.
MozReview-Commit-ID: 9RM5kQ2OqfC
A default constructed SurfacePipe contains a NullSurfaceSink as its
filter in mHead. This filter does nothing and is merely a placeholder.
Since most SurfacePipe objects are constructed with the default
constructor, and NullSurfaceSink has no (modified) state, we use a
singleton to represent it. Normally the SurfacePipe owns its filter, so
it needs to do a special check for NullSurfaceSink to ensure it doesn't
free it explicitly.
A Decoder object contains a default constructed SurfacePipe until it
needs to create the first frame from an image. This is a very brief
window because it does not take very long or much data to get to this
stage of decoding.
The NullSurfaceSink singleton is freed upon shutdown; however, some
ISurfaceProvider objects may still be lingering after this. If their
Decoder has yet to create the first frame, that means the SurfacePipe
actually contains a dangling pointer to the already freed singleton. To
make things worse, it would then actually try to free the filter, because
the filter did not match the singleton (which had already been freed!).
As such, this change removes NullSurfaceSink entirely. We never use the
SurfacePipe before initializing it with a proper filter, and it would be
considered a programming error to do so. Instead, we let SurfacePipe::mHead
be null, and assert that it is non-null whenever any operations are
performed on the SurfacePipe.
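A sketch of the new invariant (heavily simplified; the real SurfacePipe API
is richer than this):

  class SurfacePipe
  {
  public:
    void ResetToFirstRow()
    {
      // mHead is only null for a default constructed, uninitialized pipe;
      // using one at this point is a programming error.
      MOZ_ASSERT(mHead, "Use of an uninitialized SurfacePipe!");
      mHead->ResetToFirstRow();
    }

  private:
    mozilla::UniquePtr<SurfaceFilter> mHead;  // null until a real filter is set
  };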
This mechanically replaces nsILocalFile with nsIFile in
*.js, *.jsm, *.sjs, *.html, *.xul, *.xml, and *.py.
MozReview-Commit-ID: 4ecl3RZhOwC
--HG--
extra : rebase_source : 412880ea27766118c38498d021331a3df6bccc70
nsIURI.originCharset had two use cases:
1) Dealing with the spec-incompliant feature of escapes in the hash
(reference) part of the URL.
2) For UI display of non-UTF-8 URLs.
For hash part handling, we use the document charset instead. For pretty
display of query strings on legacy-encoded pages, we no longer cater to
them (see bug 817374 comment 18).
Also, the URL Standard has no concept of "origin charset". This patch
removes nsIURI.originCharset to reduce complexity and improve spec
compliance.
MozReview-Commit-ID: 3tHd0VCWSqF
--HG--
extra : rebase_source : b2caa01f75e5dd26078a7679fd7caa319a65af14
We should use AutoTArray instead because telemetry shows that the vast
majority of images loaded (95%+) contain only a single chunk when
decoding finishes. Even if part 2 of this bug increases the number of
images loaded with multiple chunks, we still call SourceBuffer::Compact
after the decode is complete, which typically will reduce the number of
chunks to 1 (unless memory is very low and we fail to consolidate the
chunks). Thus it should be rare to contain more than 1 chunk on
anything but a temporary basis, and we can easily save the malloc
overhead.
Note that SourceBuffer::AppendChunk still uses the fallible variant of
nsTArray::AppendElement.
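A sketch of the container change (the "before" declaration is inferred from
the description; the inline capacity of 1 matches the single-chunk common
case):

  // Before: chunks always lived in a separately allocated array.
  //   nsTArray<Chunk> mChunks;

  // After: storage for one chunk is inline, so the common case of a single
  // chunk once decoding finishes needs no extra heap allocation.
  AutoTArray<Chunk, 1> mChunks;

  // AppendChunk still appends fallibly, so running out of memory is reported
  // rather than fatal:
  //   if (!mChunks.AppendElement(std::move(aChunk), mozilla::fallible)) { ... }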
Currently SourceBuffer::ExpectLength will allocate a buffer which is a
multiple of MIN_CHUNK_CAPACITY (4096) bytes, no matter what the expected
size is. While it is true that HTTP servers can lie, and that we need to
handle that for legacy purposes, it is more likely the HTTP servers are
telling the truth when it comes to the content length. Additionally
images sourced from other locations, such as the file system or data
URIs, are always going to have the correct size information (barring a
bug elsewhere in the file system or our code). We should be able to
trust the size given as a good first guess.
While overallocating in general is a waste of memory,
SourceBuffer::Compact causes a far worse problem. After we have written
all of the data, and there are no active readers, we attempt to shrink
the allocated buffer(s) into a single contiguous chunk of the exact
length that we need (e.g. N allocations down to 1, or 1 oversized
allocation down to 1 perfectly sized). Since we almost always overallocate,
that means we almost
always trigger the logic in SourceBuffer::Compact to reallocate the data
into a properly sized buffer. If we had simply trusted the expected size
in the first place, we could have avoided this situation for the
majority of images.
In the case that we really do get the wrong size, then we will allocate
additional chunks which are multiples of MIN_CHUNK_CAPACITY bytes to fit
the data. At most, this will increase the number of discrete allocations
by 1, and trigger SourceBuffer::Compact to consolidate at the end. Since
we almost always did that before, and now we rarely do, this is a
significant win.
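A sketch of the sizing policy change (the rounding helper named here is
hypothetical, and the real ExpectLength does more bookkeeping than this):

  // Before: the first allocation was always rounded up to a chunk multiple,
  // so nearly every image later hit the reallocation in SourceBuffer::Compact.
  //   size_t capacity = RoundUpToMultiple(aExpectedLength, MIN_CHUNK_CAPACITY);

  // After: trust the expected length as a good first guess. If it turns out
  // to be wrong, we add at most one more chunk and Compact consolidates.
  size_t capacity = aExpectedLength;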