Crash reports indicate that SourceBuffer::mStatus is sometimes not set,
and thus SourceBuffer::AppendFromInputStream crashes when it
dereferences an invalid Maybe<nsresult>. Since SourceBuffer::Append
cannot fail without setting mStatus (or mStatus already being set), the
input stream must have failed to read all of the data and swallowed any
internal errors. While we used to assert in this situation, we also
historically swallowed the error silently. This patch checks mStatus and
returns it when it is available; when it is not, we assert as before in
debug builds and silently return in release builds.
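A minimal sketch of the intended check (SourceBuffer, mStatus, and
Maybe<nsresult> are the real names; locking and the writer helper are
elided, so this is illustrative rather than the exact patch):

  nsresult
  SourceBuffer::AppendFromInputStream(nsIInputStream* aInputStream,
                                      uint32_t aCount)
  {
    uint32_t bytesRead;
    nsresult rv = aInputStream->ReadSegments(AppendToSourceBuffer, this,
                                             aCount, &bytesRead);
    if (NS_FAILED(rv)) {
      return rv;
    }
    if (bytesRead == aCount) {
      return NS_OK;
    }

    // The stream read less than requested but reported success, so it
    // swallowed an internal error. Use the status Append() recorded if
    // there is one; otherwise assert like before, and silently return.
    if (mStatus) {
      return *mStatus;
    }
    MOZ_ASSERT_UNREACHABLE("AppendFromInputStream read too few bytes?");
    return NS_ERROR_FAILURE;
  }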
This was done automatically by replacing:
s/mozilla::Move/std::move/
s/ Move(/ std::move(/
s/(Move(/(std::move(/
Removing the 'using mozilla::Move;' lines.
And then with a few manual fixups; see the bug for the split series.
MozReview-Commit-ID: Jxze3adipUh
Regardless of the size of an encoded image, SourceBuffer::Compact would
try to consolidate all of the chunks into a single chunk. If an image is
quite large, this can be actively harmful: we demand a very large
contiguous chunk of memory for no real benefit, and we spend extra time
on the main thread doing the memcpy/consolidation.
Instead we now cap the chunk size at 20MB. If we start allocating chunks
of this size, we will not perform compacting once we have received all
of the data (save for realloc'ing the last chunk, since it probably
isn't full).
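A self-contained sketch of the capping logic; MAX_CHUNK_CAPACITY is an
illustrative name for the new 20MB cap:

  #include <cstddef>

  constexpr size_t MIN_CHUNK_CAPACITY = 4096;
  constexpr size_t MAX_CHUNK_CAPACITY = 20 * 1024 * 1024;

  // Round a requested size up to a multiple of MIN_CHUNK_CAPACITY, but
  // never beyond the cap. Once chunks are allocated at the cap, Compact()
  // skips full consolidation and only shrinks the final, partial chunk.
  constexpr size_t ChunkCapacity(size_t aLength)
  {
    if (aLength >= MAX_CHUNK_CAPACITY) {
      return MAX_CHUNK_CAPACITY;
    }
    return (aLength + MIN_CHUNK_CAPACITY - 1)
           / MIN_CHUNK_CAPACITY * MIN_CHUNK_CAPACITY;
  }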
On a related note, if we hit an out-of-memory condition in the middle of
appending data to the SourceBuffer, we would swallow the error, because
nsIInputStream::ReadSegments will succeed if any data was written. This
leaves the SourceBuffer out of sync. We now propagate this error up
properly to the higher levels.
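The subtlety is that the writer callback's failure is not reflected in
ReadSegments' return value once any earlier bytes were consumed. A
sketch of the writer side (AppendToSourceBuffer is an illustrative name,
but the signature matches nsWriteSegmentFun):

  static nsresult
  AppendToSourceBuffer(nsIInputStream*, void* aClosure,
                       const char* aFromSegment, uint32_t /* aToOffset */,
                       uint32_t aCount, uint32_t* aWriteCount)
  {
    SourceBuffer* buffer = static_cast<SourceBuffer*>(aClosure);
    nsresult rv = buffer->Append(aFromSegment, aCount);  // sets mStatus on OOM
    if (NS_FAILED(rv)) {
      // This stops ReadSegments, but if previous segments were written,
      // ReadSegments itself still returns NS_OK. The caller must notice
      // the short read and consult mStatus, as in the sketch above.
      *aWriteCount = 0;
      return rv;
    }
    *aWriteCount = aCount;
    return NS_OK;
  }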
Currently SourceBuffer::ExpectLength will allocate a buffer which is a
multiple of MIN_CHUNK_CAPACITY (4096) bytes, no matter what the expected
size is. While it is true that HTTP servers can lie, and that we need to
handle that for legacy purposes, it is more likely that HTTP servers are
telling the truth about the content length. Additionally, images sourced
from other locations, such as the file system or data URIs, are always
going to have the correct size information (barring a bug elsewhere in
the file system or our code). We should be able to trust the given size
as a good first guess.
While overallocating in general is a waste of memory,
SourceBuffer::Compact causes a far worse problem. After we have written
all of the data, and there are no active readers, we attempt to shrink
the allocated buffer(s) into a single contiguous chunk of the exact
length that we need (e.g. N allocations down to 1, or 1 oversized
allocation down to 1 perfectly sized). Since we almost always
overallocate, that means we almost
always trigger the logic in SourceBuffer::Compact to reallocate the data
into a properly sized buffer. If we had simply trusted the expected size
in the first place, we could have avoided this situation for the
majority of images.
In the case that we really do get the wrong size, we will allocate
additional chunks, in multiples of MIN_CHUNK_CAPACITY bytes, to fit the
data. At most this increases the number of discrete allocations by 1 and
triggers SourceBuffer::Compact to consolidate at the end. Since we
almost always compacted before, and now we rarely do, this is a
significant win.
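A sketch of the first-guess capacity under this change, reusing the
MIN_CHUNK_CAPACITY/MAX_CHUNK_CAPACITY names from the sketch above:

  #include <algorithm>

  size_t FirstChunkCapacity(size_t aExpectedLength)
  {
    // Trust the expected length exactly instead of rounding it up to a
    // MIN_CHUNK_CAPACITY multiple. If the server lied and more data
    // arrives, Append() falls back to MIN_CHUNK_CAPACITY-multiple chunks,
    // costing at most one extra discrete allocation plus a final
    // Compact().
    return std::min(aExpectedLength, MAX_CHUNK_CAPACITY);
  }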
SourceBuffer::Compact attempts to consolidate multiple discrete
allocations into a single buffer, as well as trim excess capacity from a
single allocation if we set aside too much. Using realloc lets
jemalloc (or whatever heap implementation we have) decide which is
better -- growing the existing buffer if there is sufficient free memory
contiguous with the first chunk, or allocating a new buffer entirely.
Since we were going to copy regardless, this should result either in an
improvement or the status quo. Brief empirical testing on Linux suggests
somewhere from 1/3 to 1/2 of allocations resulted in reusing the same
data pointer (and presumably avoided a copy as a result). This also has
the advantage of potentially reducing OOM errors, since the heap may
have enough room to satisfy an in-place expansion even when it could not
satisfy an entirely new buffer.
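A self-contained sketch of the realloc step (the real code then copies
any remaining chunks into the result):

  #include <cstdlib>

  // Let the heap decide: realloc() may extend the allocation in place
  // (same pointer, no copy) or move the contents to a new allocation.
  // Either way we end up with one exactly-sized buffer. Assumes
  // aUsedLength > 0.
  char* CompactTo(char* aData, size_t aUsedLength)
  {
    char* result = static_cast<char*>(realloc(aData, aUsedLength));
    return result ? result : aData;  // On OOM, keep the oversized buffer.
  }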
The ICO decoder creates a cloned SourceBufferIterator for its own
SourceBuffer bounded by the resource size. This iterator is used by the
child decoder (PNG, BMP) for decoding the actual image. However, we
rely upon the ICO decoder and its iterator to drive the event loop,
rather than the child decoder and the cloned iterator. The cloned
iterator knows how many bytes it requires, but without changes to
StreamingLexer it is problematic to give it a consumer that would tell
us when to resume.
Without a consumer (IResumable), we won't have anything to notify when
we get the appropriate amount of data for the caller. If the caller
tries to advance after some unknown amount of data has been written to
the SourceBuffer, then it may need to go back to waiting. Thus it should
only assert for a spurious wakeup if we have an actual consumer.
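A sketch of the resulting wakeup handling (illustrative; mConsumer and
HasEnoughData() are stand-in names for the iterator's optional
IResumable and its data check):

  // A spurious wakeup is only a bug when an IResumable consumer was
  // attached, since the consumer is what guarantees we resume at the
  // right time. A cloned iterator without a consumer may legitimately
  // wake early and should simply keep waiting.
  if (!HasEnoughData()) {
    MOZ_ASSERT(!mConsumer, "Spurious wakeup despite an IResumable consumer?");
    return SourceBufferIterator::WAITING;
  }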
Slightly less than half (93 / 210) of the NS_METHOD instances in the codebase
are because of the use of NS_CALLBACK in
nsI{Input,Output,UnicharInput}Stream.idl. The use of __stdcall on Win32 isn't
important for these callbacks because they are only used as arguments to
[noscript] methods.
This patch converts them to vanilla |nsresult| functions. It increases the
size of xul.dll by ~600 bytes, which is about 0.001%.
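For example, the writer callback type in nsIInputStream.idl changes
shape roughly like this (abridged):

  // Before: NS_CALLBACK expands to an __stdcall function pointer on Win32.
  typedef NS_CALLBACK(nsWriteSegmentFun)(nsIInputStream* aInStream,
                                         void* aClosure,
                                         const char* aFromSegment,
                                         uint32_t aToOffset,
                                         uint32_t aCount,
                                         uint32_t* aWriteCount);

  // After: a vanilla nsresult function pointer; the calling convention is
  // irrelevant because these are only passed to [noscript] methods.
  typedef nsresult (*nsWriteSegmentFun)(nsIInputStream* aInStream,
                                        void* aClosure,
                                        const char* aFromSegment,
                                        uint32_t aToOffset,
                                        uint32_t aCount,
                                        uint32_t* aWriteCount);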
--HG--
extra : rebase_source : c15d85298e0975fd030cd8f8f8e54501f453959b
This makes it clearer that, unlike most SizeOf*() functions, this one
doesn't measure any children hanging off the array.
And do likewise for nsTObserverArray.
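A usage sketch of the distinction (mFrames and Frame are hypothetical;
the shallow call measures only the array's own buffer):

  size_t bytes = mFrames.ShallowSizeOfExcludingThis(aMallocSizeOf);
  // Children hanging off the elements must be measured explicitly:
  for (const Frame& frame : mFrames) {
    bytes += frame.SizeOfExcludingThis(aMallocSizeOf);
  }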
--HG--
extra : rebase_source : 6a8c8d8ffb53ad51b5773afea77126cdd767f149