This mechanically replaces nsILocalFile with nsIFile in
*.js, *.jsm, *.sjs, *.html, *.xul, *.xml, and *.py.
MozReview-Commit-ID: 4ecl3RZhOwC
--HG--
extra : rebase_source : 412880ea27766118c38498d021331a3df6bccc70
nsIURI.originCharset had two use cases:
1) Dealing with the spec-incompliant feature of escapes in the hash
(reference) part of the URL.
2) For UI display of non-UTF-8 URLs.
For hash part handling, we use the document charset instead. For pretty
display of query strings on legacy-encoded pages, we no longer care about
them (see bug 817374 comment 18).
Also, the URL Standard has no concept of an "origin charset". This patch
removes nsIURI.originCharset to reduce complexity and improve spec
compliance.
MozReview-Commit-ID: 3tHd0VCWSqF
--HG--
extra : rebase_source : b2caa01f75e5dd26078a7679fd7caa319a65af14
We should use AutoTArray instead because telemetry shows that the vast
majority of images loaded (95%+) contain only a single chunk when
decoding finishes. Even if part 2 of this bug increases the number of
images loaded with multiple chunks, we still call SourceBuffer::Compact
after the decode is complete, which typically will reduce the number of
chunks to 1 (unless memory is very low and we fail to consolidate the
chunks). Thus it should be rare to contain more than 1 chunk on
anything but a temporary basis, and we can easily save the malloc
overhead.
Note that SourceBuffer::AppendChunk still uses the fallible variant of
nsTArray::AppendElement.
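A minimal sketch of the shape this takes (SourceBufferSketch and Chunk
are illustrative stand-ins, not the real imagelib types; AutoTArray and
mozilla::fallible are the real xpcom/MFBT pieces):

    #include <utility>        // std::move
    #include "nsTArray.h"     // AutoTArray, mozilla::fallible

    struct Chunk { /* owns a buffer of decoded bytes */ };

    class SourceBufferSketch {
      // Inline storage for one chunk: 95%+ of decodes never need a
      // second one, so the common case makes no heap allocation for
      // the chunk array itself.
      AutoTArray<Chunk, 1> mChunks;

     public:
      bool AppendChunk(Chunk&& aChunk) {
        // Still the fallible variant: OOM yields a false return
        // instead of aborting the process.
        return mChunks.AppendElement(std::move(aChunk),
                                     mozilla::fallible) != nullptr;
      }
    };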
Currently SourceBuffer::ExpectLength will allocate a buffer which is a
multiple of MIN_CHUNK_CAPACITY (4096) bytes, no matter what the expected
size is. While it is true that HTTP servers can lie, and that we need to
handle that for legacy purposes, it is more likely that they are telling
the truth about the content length. Additionally, images sourced from
other locations, such as the file system or data URIs, will always have
the correct size information (barring a bug elsewhere in the file system
or our code). We should be able to trust the given size as a good first
guess.
While overallocating in general is a waste of memory,
SourceBuffer::Compact causes a far worse problem. After we have written
all of the data, and there are no active readers, we attempt to shrink
the allocated buffer(s) into a single contiguous chunk of the exact
length that we need (e.g. N allocations to 1, or 1 oversized allocation
to 1 perfect). Since we almost always overallocate, that means we almost
always trigger the logic in SourceBuffer::Compact to reallocate the data
into a properly sized buffer. If we had simply trusted the expected size
in the first place, we could have avoided this situation for the
majority of images.
In the case that we really do get the wrong size, then we will allocate
additional chunks which are multiples of MIN_CHUNK_CAPACITY bytes to fit
the data. At most, this will increase the number of discrete allocations
by 1, and trigger SourceBuffer::Compact to consolidate at the end. Since
we are almost always doing that before, and now we rarely do, this is a
significant win.
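A hedged sketch of the sizing policy described above (the helper names
are illustrative; only MIN_CHUNK_CAPACITY matches the real code):

    #include <cstddef>

    static const size_t MIN_CHUNK_CAPACITY = 4096;

    // First allocation: trust the expected length as a good first
    // guess rather than rounding up to a MIN_CHUNK_CAPACITY multiple.
    size_t InitialCapacity(size_t aExpectedLength) {
      return aExpectedLength > 0 ? aExpectedLength : MIN_CHUNK_CAPACITY;
    }

    // Follow-up allocations, only hit when the expected length was
    // wrong: fall back to MIN_CHUNK_CAPACITY multiples, typically
    // adding at most one extra chunk before Compact consolidates.
    size_t FollowupCapacity(size_t aBytesStillNeeded) {
      size_t chunks =
          (aBytesStillNeeded + MIN_CHUNK_CAPACITY - 1) / MIN_CHUNK_CAPACITY;
      if (chunks == 0) {
        chunks = 1;
      }
      return chunks * MIN_CHUNK_CAPACITY;
    }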
SourceBuffer::Compact attempts to consolidate multiple, discrete
allocations into a single buffer, as well as to trim excess capacity from
a single allocation if we set aside too much. Using realloc lets
jemalloc (or whatever heap implementation we have) decide which is
better -- growing the existing buffer if there is sufficient free memory
contiguous with the first chunk, or allocating a new buffer entirely.
Since we were going to copy regardless, this should result either in an
improvement or the status quo. Brief empirical testing on Linux suggests
somewhere from 1/3 to 1/2 of allocations resulted in reusing the same
data pointer (and presumably avoided a copy as a result). This also has
the advantage of potentially reducing OOM errors, as it may have enough
room to satisfy an expansion, but not an entirely new buffer.
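A standalone sketch of the consolidation idea (the real
SourceBuffer::Compact tracks its chunks differently; this just shows the
realloc-first approach):

    #include <cstdlib>
    #include <cstring>

    // Consolidate a first chunk holding aFirstLength bytes plus
    // aRestCount additional chunks into one exact-size buffer of
    // aTotalLength bytes. Returns the buffer, or null on OOM.
    char* CompactSketch(char* aFirst, size_t aFirstLength,
                        char** aRest, const size_t* aRestLengths,
                        size_t aRestCount, size_t aTotalLength) {
      // realloc lets the heap implementation decide: grow or shrink
      // the first chunk in place when adjacent free memory allows it,
      // or allocate-and-copy otherwise. We were copying anyway, so
      // this is at worst the status quo.
      char* buf = static_cast<char*>(realloc(aFirst, aTotalLength));
      if (!buf) {
        return nullptr;
      }
      size_t offset = aFirstLength;
      for (size_t i = 0; i < aRestCount; ++i) {
        memcpy(buf + offset, aRest[i], aRestLengths[i]);
        offset += aRestLengths[i];
        free(aRest[i]);
      }
      return buf;
    }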
All the SizeOf{In,Ex}cludingThis() functions take a MallocSizeOf function
which measures memory blocks. This patch introduces a new type, SizeOfState,
which includes a MallocSizeOf function *and* a table of already-measured
pointers, called SeenPtrs. This gives us a general mechanism to measure
graph-like data structures, by recording which nodes have already been
measured. (This approach is used in a number of existing reporters, but not in
a uniform fashion.)
The patch also converts the window memory reporting to use SizeOfState in a lot
of places, all the way through to the measurement of Elements. This is a
precursor for bug 1383977 which will measure Stylo elements, which involve
Arcs.
The patch also converts the existing mAlreadyMeasuredOrphanTrees table in the
OrphanReporter to use the new mechanism.
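A minimal stand-in for the new type (the real SizeOfState lives
alongside MallocSizeOf in the tree; std::unordered_set substitutes for
the actual SeenPtrs table here):

    #include <cstddef>
    #include <unordered_set>

    typedef size_t (*MallocSizeOf)(const void* aPtr);

    struct SizeOfStateSketch {
      explicit SizeOfStateSketch(MallocSizeOf aMallocSizeOf)
          : mMallocSizeOf(aMallocSizeOf) {}

      // Measure a block only the first time we see it, so nodes that
      // are reachable via multiple edges of a graph-like structure
      // (e.g. Arcs) are not double-counted.
      size_t SizeOfOnce(const void* aPtr) {
        if (!aPtr || !mSeenPtrs.insert(aPtr).second) {
          return 0;  // null, or already measured
        }
        return mMallocSizeOf(aPtr);
      }

      MallocSizeOf mMallocSizeOf;
      std::unordered_set<const void*> mSeenPtrs;  // the "SeenPtrs" table
    };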
--HG--
extra : rebase_source : 2c23285f8b6c3b667560a9d14014efc4633aed51
This is similar to the previous patch, but for the 8-bit string variants.
Also, it changes assignment to Adopt() in GetCString() and GetDefaultCString()
to avoid an extra copy.
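A hedged sketch of the Adopt() change (GetValueOwned is a hypothetical
stand-in for whatever hands back the heap-allocated C string inside
GetCString()/GetDefaultCString()):

    #include "nsString.h"

    // Hypothetical: callee allocates the returned buffer.
    extern nsresult GetValueOwned(char** aOut);

    nsresult GetCStringSketch(nsACString& aResult) {
      char* value = nullptr;
      nsresult rv = GetValueOwned(&value);
      if (NS_FAILED(rv)) {
        return rv;
      }
      // Before: aResult = value;  // copies the buffer
      aResult.Adopt(value);        // takes ownership; no extra copy
      return NS_OK;
    }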
--HG--
extra : rebase_source : eba805c3a7b809d5ccd6e853b1c9010db9477667
The ICO decoder creates a cloned SourceBufferIterator for its own
SourceBuffer bounded by the resource size. This iterator is used by the
child decoder (PNG, BMP) for decoding the actual image. However, we rely
upon the ICO decoder and its iterator to drive the event loop, rather than
the child decoder and the cloned iterator. The cloned iterator knows how
many bytes it requires, but without changes to StreamingLexer it is
problematic to give it a consumer to tell us when to resume.
Without a consumer (IResumable), we won't have anything to notify when
we get the appropriate amount of data for the caller. If the caller
tries to advance after some unknown amount of data has been written to
the SourceBuffer, then it may need to go back to waiting. Thus it should
only assert for a spurious wakeup if we have an actual consumer.
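In sketch form (member and method names here are stand-ins, not the
exact StreamingLexer/SourceBufferIterator code):

    #include <cassert>

    struct IResumable { /* notified when the data we need arrives */ };

    struct IteratorSketch {
      IResumable* mConsumer = nullptr;  // null for the ICO child's clone

      void CheckWakeup(bool aHasNewData) {
        // Without a consumer, nothing told us when to resume, so a
        // wakeup with no new data is expected and we must go back to
        // waiting; only a consumer-driven wakeup should assert.
        assert(aHasNewData || !mConsumer);
      }
    };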
Thus far gtests have only tested fairly simple images which already
render the same on all platforms (e.g. solid green 100x100 square).
If we want to test more complicated images consistently across
platforms, we need to ensure the color adjustments we perform are
also consistent. Using the pref gfx.color_management.force_srgb to
force an sRGB CMS profile makes us consistent with the reftests and
mochitests.
However, an additional quirk of the gtests is that we own the main
thread and we never check our event queue to see if anything is
pending. Depending on the initialization order of our graphics
dependencies, it may or may not have created pending runnables to
process the pref change. As such, we need to change the pref,
initialize imagelib/gfx and then check for, and if present execute,
any necessary runnables. Only then can we be sure that our desired
CMS profile is applied.
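The setup order, roughly (Preferences::SetBool and
NS_ProcessPendingEvents are real Gecko APIs; the surrounding function is
illustrative):

    #include "mozilla/Preferences.h"
    #include "nsThreadUtils.h"

    void SetupForceSRGB() {
      // Force the sRGB CMS profile for consistency with reftests
      // and mochitests.
      mozilla::Preferences::SetBool("gfx.color_management.force_srgb",
                                    true);

      // ... initialize imagelib/gfx here ...

      // We own the main thread in gtests, so drain any runnables the
      // pref change may have queued before relying on the profile.
      NS_ProcessPendingEvents(nullptr);
    }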
imgRequestProxy::IsOnEventTarget must return false in order for
imgRequestProxy::Dispatch to be called. Typically we check for mListener
before any of this, but in imgRequest::OnLoadComplete we have other things
to do besides notifying the listener. As such, we want to dispatch even if
there is no listener, and that is when the assert can fail. Since
IsOnEventTarget can only return false if it has either a tab group *or* a
listener, we can change the assert to match.
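A sketch of the relaxed assertion (plain stand-ins for the real
imgRequestProxy members):

    #include <cassert>

    struct ProxySketch {
      void* mListener = nullptr;  // stand-ins for the real members
      void* mTabGroup = nullptr;

      void Dispatch(/* nsIRunnable */) {
        // Before: assert(mListener); -- fails for OnLoadComplete,
        // which dispatches even without a listener. IsOnEventTarget()
        // can only return false with a tab group or a listener, so
        // match that.
        assert(mListener || mTabGroup);
        // ... dispatch to the event target ...
      }
    };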