This mechanically replaces nsILocalFile with nsIFile in
*.js, *.jsm, *.sjs, *.html, *.xul, *.xml, and *.py.
MozReview-Commit-ID: 4ecl3RZhOwC
--HG--
extra : rebase_source : 412880ea27766118c38498d021331a3df6bccc70
nsIURI.originCharset had two use cases:
1) Dealing with the spec-incompliant feature of escapes in the hash
(reference) part of the URL.
2) For UI display of non-UTF-8 URLs.
For hash part handling, we now use the document charset instead. As for
pretty display of query strings on legacy-encoded pages, we no longer
care about it (see bug 817374 comment 18).
Also, the URL Standard has no concept of an "origin charset". This patch
removes nsIURI.originCharset to reduce complexity and improve spec
compliance.
MozReview-Commit-ID: 3tHd0VCWSqF
--HG--
extra : rebase_source : b2caa01f75e5dd26078a7679fd7caa319a65af14
We should use AutoTArray instead because telemetry shows that the vast
majority of images loaded (95%+) contain only a single chunk when
decoding finishes. Even if part 2 of this bug increases the number of
images loaded with multiple chunks, we still call SourceBuffer::Compact
after the decode is complete, which typically will reduce the number of
chunks to 1 (unless memory is very low and we fail to consolidate the
chunks). Thus it should be rare to contain more than 1 chunk on
anything but a temporary basis, and we can easily save the malloc
overhead.
Note that SourceBuffer::AppendChunk still uses the fallible variant of
nsTArray::AppendElement.
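A rough sketch of the shape this takes (simplified; Chunk stands in for
SourceBuffer's chunk type, and the method body is illustrative):

  #include "nsTArray.h"  // AutoTArray, mozilla::fallible
  #include <utility>     // std::move

  // One chunk of inline storage covers the 95%+ single-chunk case with
  // no heap allocation for the array itself.
  AutoTArray<Chunk, 1> mChunks;

  bool AppendChunk(Chunk&& aChunk)
  {
    // Still the fallible variant, so appending extra chunks reports OOM
    // instead of aborting.
    return mChunks.AppendElement(std::move(aChunk),
                                 mozilla::fallible) != nullptr;
  }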
Currently SourceBuffer::ExpectLength will allocate a buffer which is a
multiple of MIN_CHUNK_CAPACITY (4096) bytes, no matter what the expected
size is. While it is true that HTTP servers can lie, and that we need to
handle that for legacy purposes, it is more likely the HTTP servers are
telling the truth when it comes to the content length. Additionally
images sourced from other locations, such as the file system or data
URIs, are always going to have the correct size information (barring a
bug elsewhere in the file system or our code). We should be able to
trust the size given as a good first guess.
While overallocating in general is a waste of memory,
SourceBuffer::Compact causes a far worse problem. After we have written
all of the data, and there are no active readers, we attempt to shrink
the allocated buffer(s) into a single contiguous chunk of the exact
length that we need (e.g. N allocations to 1, or 1 oversized allocation
to 1 perfect). Since we almost always overallocate, that means we almost
always trigger the logic in SourceBuffer::Compact to reallocate the data
into a properly sized buffer. If we had simply trusted the expected size
in the first place, we could have avoided this situation for the
majority of images.
In the case that we really do get the wrong size, then we will allocate
additional chunks which are multiples of MIN_CHUNK_CAPACITY bytes to fit
the data. At most, this will increase the number of discrete allocations
by 1, and trigger SourceBuffer::Compact to consolidate at the end. Since
we are almost always doing that before, and now we rarely do, this is a
significant win.
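A sketch of the new capacity calculation (illustrative only; the real
method still goes through SourceBuffer's chunk bookkeeping and clamping):

  // Trust the expected length as the first chunk's capacity instead of
  // rounding it up to a multiple of MIN_CHUNK_CAPACITY; only fall back
  // to the old minimum when we have no size information at all.
  size_t capacity = aExpectedLength > 0
                      ? aExpectedLength
                      : MIN_CHUNK_CAPACITY;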
SourceBuffer::Compact attempts to consolidate multiple, discrete
allocations into a single buffer, as well as trim excess capacity from a
singular allocation if we set aside too much. Using realloc lets
jemalloc (or whatever heap implementation we have) decide which is
better -- growing the existing buffer if there is sufficient free memory
contiguous with the first chunk, or allocating a new buffer entirely.
Since we were going to copy regardless, this should result either in an
improvement or the status quo. Brief empirical testing on Linux suggests
somewhere from 1/3 to 1/2 of allocations resulted in reusing the same
data pointer (and presumably avoided a copy as a result). This also has
the advantage of potentially reducing OOM errors, as it may have enough
room to satisfy an expansion, but not an entirely new buffer.
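A minimal sketch of the realloc-based path, reduced to the single-chunk
shrink-to-fit case (the real code works on SourceBuffer's chunk list):

  #include <stdlib.h>  // realloc

  // Let the heap decide: shrink/grow in place when there is room around
  // the existing allocation, otherwise allocate-and-copy as before.
  char* data = static_cast<char*>(realloc(oldData, totalLength));
  if (!data) {
    return NS_ERROR_OUT_OF_MEMORY;  // oldData is still valid on failure
  }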
All the SizeOf{In,Ex}cludingThis() functions take a MallocSizeOf function
which measures memory blocks. This patch introduces a new type, SizeOfState,
which includes a MallocSizeOf function *and* a table of already-measured
pointers, called SeenPtrs. This gives us a general mechanism to measure
graph-like data structures, by recording which nodes have already been
measured. (This approach is used in a number of existing reporters, but not in
a uniform fashion.)
The patch also converts the window memory reporting to use SizeOfState in a lot
of places, all the way through to the measurement of Elements. This is a
precursor for bug 1383977 which will measure Stylo elements, which involve
Arcs.
The patch also converts the existing mAlreadyMeasuredOrphanTrees table in the
OrphanReporter to use the new mechanism.
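A simplified sketch of the new type (the hash-table details may differ
from the real definition):

  #include "mozilla/MemoryReporting.h"  // mozilla::MallocSizeOf
  #include "nsHashKeys.h"               // nsPtrHashKey
  #include "nsTHashtable.h"

  struct SizeOfState {
    explicit SizeOfState(mozilla::MallocSizeOf aMallocSizeOf)
      : mMallocSizeOf(aMallocSizeOf) {}

    // Returns true if aPtr has been measured already; otherwise records
    // it. This is what lets reporters walk graph-like structures without
    // double-counting shared nodes.
    bool HaveSeenPtr(const void* aPtr)
    {
      if (mSeenPtrs.Contains(aPtr)) {
        return true;
      }
      mSeenPtrs.PutEntry(aPtr);
      return false;
    }

    mozilla::MallocSizeOf mMallocSizeOf;
    nsTHashtable<nsPtrHashKey<const void>> mSeenPtrs;
  };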
--HG--
extra : rebase_source : 2c23285f8b6c3b667560a9d14014efc4633aed51
This is similar to the previous patch, but for the 8-bit string variants.
Also, it changes assignment to Adopt() in GetCString() and GetDefaultCString()
to avoid an extra copy.
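A small sketch of the Adopt() change (the buffer source here is
illustrative; Adopt() assumes the buffer was allocated compatibly with
the string allocator):

  #include "nsReadableUtils.h"  // ToNewCString
  #include "nsString.h"

  nsAutoCString result;
  char* buf = ToNewCString(NS_LITERAL_CSTRING("some pref value"));
  result.Adopt(buf);  // take ownership: no copy, no manual free()
  // By contrast, `result = buf;` would copy the characters and leave
  // buf for the caller to free.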
--HG--
extra : rebase_source : eba805c3a7b809d5ccd6e853b1c9010db9477667
The ICO decoder creates a cloned SourceBufferIterator for its own
SourceBuffer bounded by the resource size. This iterator is used by the
child decoder (PNG, BMP) for decoding the actual image. However, we
rely upon the ICO decoder and its iterator to drive the event loop,
rather than the child decoder and the cloned iterator. The cloned
iterator knows how
many bytes it requires, but it is problematic to give it a consumer to
tell us when to resume without changes to StreamingLexer.
Without a consumer (IResumable), we won't have anything to notify when
we get the appropriate amount of data for the caller. If the caller
tries to advance after some unknown amount of data has been written to
the SourceBuffer, then it may need to go back to waiting. Thus it should
only assert for a spurious wakeup if we have an actual consumer.
Thus far gtests have only tested fairly simple images which already
render the same on all platforms (e.g. a solid green 100x100 square).
If we want to test more complicated images consistently across
platforms, we need to ensure the color adjustments we perform are
also consistent. Using the pref gfx.color_management.force_srgb to
force an sRGB CMS profile makes us consistent with the reftests and
mochitests.
However, an additional quirk of the gtests is that we own the main
thread and we never check our event queue to see if anything is
pending. Depending on the initialization order of our graphics
dependencies, it may or may not have created pending runnables to
process the pref change. As such, we need to change the pref,
initialize imagelib/gfx and then check for, and if present execute,
any necessary runnables. Only then can we be sure that our desired
CMS profile is applied.
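A sketch of the resulting setup in the gtest environment (the exact
plumbing and call order are simplified):

  #include "gfxPlatform.h"
  #include "mozilla/Preferences.h"
  #include "nsThreadUtils.h"  // NS_ProcessPendingEvents

  // Force the sRGB CMS profile before gfx/imagelib initialize.
  mozilla::Preferences::SetBool("gfx.color_management.force_srgb", true);
  gfxPlatform::GetPlatform();  // triggers gfx (and CMS) initialization

  // We own the main thread in gtests and never spin the event loop, so
  // drain whatever the pref change or initialization queued; only then
  // is the forced profile guaranteed to be in effect.
  NS_ProcessPendingEvents(nullptr);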
imgRequestProxy::IsOnEventTarget must return false in order for
imgRequestProxy::Dispatch to be called. Typically we check for mListener
before any of this, but in imgRequest::OnLoadComplete we have other
things to do besides notifying the listener. As such, we want to
dispatch even if there is no listener, and that is when the assert can
fail. Since IsOnEventTarget can only return false if it has either a tab
group *or* a listener, we can change the assert to match.
imgRequestProxy::SyncClone preserves the original behaviour of issuing
synchronous notifications once cloned. Some uses and tests depend on
this behaviour but in an ideal world, it would not be required.
imgRequestProxy::Clone is intended to be the replacement going forward,
which issues asynchronous notifications once cloned.
* nsStandardURL::GetHost/GetHostPort/GetSpec now contain a
  punycode-encoded hostname.
* Added nsIURI::GetDisplayHost/GetDisplayHostPort/GetDisplaySpec, which
  return Unicode hostnames, depending on the hostname, the character
  blacklist, and the network.IDN_show_punycode pref (see the sketch
  after this list).
* Removed mHostEncoding since it's no longer needed (the hostname is
  always ASCII-encoded).
* Added mCheckedIfHostA to track whether GetDisplayHost can return the
  regular host or we need to use the cached mDisplayHost.
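A sketch from the C++ caller's side (uri is assumed to be an
nsCOMPtr<nsIURI> pointing at a host with non-ASCII labels):

  nsAutoCString host;
  nsAutoCString displayHost;
  uri->GetHost(host);                // always the ASCII/punycode form
  uri->GetDisplayHost(displayHost);  // Unicode form, unless the label is
                                     // blacklisted or the
                                     // network.IDN_show_punycode pref
                                     // forces punycode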
MozReview-Commit-ID: 4qV9Ynhr2Jl
* * *
Bug 945240 - Make sure nsIURI.specIgnoringRef/.getSensitiveInfoHiddenSpec/.prePath contain unicode hosts when network.standard-url.punycode-host is set to false r=mcmanus
MozReview-Commit-ID: F6bZuHOWEsj
--HG--
extra : rebase_source : d8ae8bf774eb22b549370ca96565bafc930faf51
One thing to note here is that the Scale function on gfxRect has a
different implementation than that in gfx::Rect which is replacing it.
The former just scales the width/height directly whereas the latter
scales the XMost/YMost and recomputes the width/height.
MozReview-Commit-ID: 5FImdIaNfC3
--HG--
extra : rebase_source : 98662d2a52ff9652ec60b066641a07c6d5ee8e08
Most of this patch is updating a few places that use gfxMatrix to use
the equivalent-but-differently-named functions on MatrixDouble:
- Translate/Rotate/Scale get turned into PreTranslate/PreRotate/PreScale
- Transform(Point) gets turned into TransformPoint(Point)
- gfxMatrix::TransformBounds(gfxRect) gets turned into
gfxRect::TransformBoundsBy(gfxMatrix).
- gfxMatrix::Transform(gfxRect) gets turned into
gfxRect::TransformBy(gfxMatrix).
The last two functions are added in this patch as convenience wrappers
on gfxRect instead of in Matrix.h because we don't want Matrix.h to
"know" about gfxRect (to avoid making Moz2D depend on Gecko). Once we
turn gfxRect into a typedef for RectDouble these will be eliminated
anyway.
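An illustrative before/after for a typical call site (variable names
invented for the example):

  // Before:
  aMatrix.Translate(offset);
  point = aMatrix.Transform(point);
  rect = aMatrix.TransformBounds(rect);

  // After:
  aMatrix.PreTranslate(offset);
  point = aMatrix.TransformPoint(point);
  rect = rect.TransformBoundsBy(aMatrix);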
MozReview-Commit-ID: BnOjHzmOSKn
--HG--
extra : rebase_source : cf1692d1f0d44a4b05d684a66678739181a426d5
It's silly to use prmem.h within Firefox code given that in our configuration
its functions are just wrappers for malloc() et al. (Indeed, in some places we
mix PR_Malloc() with free(), or malloc() with PR_Free().)
This patch removes all uses, except for the places where we need to use
PR_Free() to free something allocated by another NSPR function; in those cases
I've added a comment explaining which function did the allocation.
--HG--
extra : rebase_source : 0f781bca68b5bf3c4c191e09e277dfc8becffa09
We currently use RasterImage::IsUnlocked for two different purposes:
1) to determine that we can't throw away the decoded image in WillDrawOpaqueNow
2) to determine when to send the unlockeddraw notification
For 1) what we want to check is mLockCount == 0.
For 2) what we actually want to check is mAnimationConsumers == 0. This
is because images that are in the visible list of background tabs will
have mLockCount == 0 but mAnimationConsumers > 0, and if we are drawing
an image we need to make sure it will be animated (mAnimationConsumers
== 0 stops the animation). This is what VectorImage already does.
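A sketch of the two checks after the split (member names as in
RasterImage, surrounding logic omitted):

  // WillDrawOpaqueNow: only the lock count matters, since it governs
  // whether the decoded frames may be discarded.
  bool mayDiscard = (mLockCount == 0);

  // unlockeddraw notification: animation consumers are what matter, so
  // a visible image in a background tab (mLockCount == 0, consumers > 0)
  // keeps animating without triggering the notification.
  bool sendUnlockedDraw = (mAnimationConsumers == 0);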
Most of the changes in this patch are just using the explicit
constructor from gfx::IntSize to gfx::Size, since gfxSize did
that implicitly but gfx::Size doesn't.
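An illustrative example of the mechanical change (values invented):

  mozilla::gfx::IntSize intSize(100, 100);
  // gfxSize converted from an IntSize implicitly; gfx::Size requires
  // the explicit constructor.
  mozilla::gfx::Size size(intSize);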
MozReview-Commit-ID: CzikGjHEXje
--HG--
extra : rebase_source : 9d19977f2a774d9a2a653db923553a6c2e06f82a
This patch enables startupRecorder.js to collect data on
loaded and shown raster and SVG images on startup via events
from native code. It also adds a test that uses this data
to find images that are unnecessarily loaded.
I've not fixed any of the affected images yet; there's a fairly
comprehensive whitelist that I want to gradually decrease by opening
bugs in the respective components.
MozReview-Commit-ID: 9KqQvKLtZhu
--HG--
extra : rebase_source : 856f06320c78ed88c4578fce985b2a526566e825
This patch makes the following changes to the macros.
- Removes PROFILER_LABEL_FUNC. It's only suitable for use in functions outside
classes, due to PROFILER_FUNCTION_NAME not getting class names, and it was
mostly misused.
- Removes PROFILER_FUNCTION_NAME. It's no longer used, and __func__ is
universally available now anyway.
- Combines the first two string literal arguments of PROFILER_LABEL and
PROFILER_LABEL_DYNAMIC into a single argument. There was no good reason for
them to be separate, and it forced a '::' in the label, which isn't always
appropriate. Also, the meaning of the "name_space" argument was interpreted
in an interesting variety of ways.
- Adds an "AUTO_" prefix to PROFILER_LABEL and PROFILER_LABEL_DYNAMIC, to make
it clearer they construct RAII objects rather than just being function calls.
(I myself have screwed up the scoping because of this in the past.)
- Fills in the 'js::ProfileEntry::Category::' qualifier within the macro, so
the caller doesn't need to. This makes a *lot* more of the uses fit onto a
single line.
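An illustrative before/after (label and category picked for the
example):

  // Before: two strings forced a "::" into the label, and the category
  // had to be fully qualified at every call site.
  PROFILER_LABEL("RasterImage", "Draw",
                 js::ProfileEntry::Category::GRAPHICS);

  // After: one string, the RAII nature is visible in the name, and the
  // macro fills in the js::ProfileEntry::Category:: qualifier.
  AUTO_PROFILER_LABEL("RasterImage::Draw", GRAPHICS);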
The patch also makes the following changes to the macro uses (beyond those
required by the changes described above).
- Fixes a bunch of labels that had gotten out of sync with the name of the
class and/or function that encloses them.
- Removes a useless PROFILER_LABEL use within a trivial scope in
EventStateManager::DispatchMouseOrPointerEvent(). It clearly wasn't serving
any useful purpose. It also serves as extra evidence that the AUTO_ prefix is
a good idea.
- Tweaks DecodePool::SyncRunIf{Preferred,Possible} so that the labelling is
done within them, instead of at their callsites, because that's a more
standard way of doing things.
--HG--
extra : rebase_source : 318d1bc6fc1425a94aacbf489dd46e4f83211de4
This patch reduces the differences between builds where the profiler is enabled
and those where the profiler is disabled. It does this by removing numerous
MOZ_GECKO_PROFILER checks.
These changes have the following consequences.
- Various functions and classes are now defined in all builds, and so can be
used unconditionally: profiler_add_marker(), profiler_set_js_context(),
profiler_clear_js_context(), profiler_get_pseudo_stack(), AutoProfilerLabel.
(They are effectively no-ops in non-profiler builds, of course.)
- The no-op versions of PROFILER_* are now gone. The remaining versions are
almost no-ops when the profiler isn't built.
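The effect at a typical call site (marker name invented):

  // Before: every caller needed a build-time guard.
  #ifdef MOZ_GECKO_PROFILER
    profiler_add_marker("Composite");
  #endif

  // After: compiles in all builds; effectively a no-op when the
  // profiler isn't built.
  profiler_add_marker("Composite");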
--HG--
extra : rebase_source : 8fb5e8757600210c2f77865694d25162f0b7698a
The problem is that if we find a cache hit, we can go through
ValidateRequestWithNewChannel to validate the cache entry.
If the CSP check then fails (asyncOpen2() will fail), the
imgRequestProxy remains there and causes the timeout.
I ran into this problem when running the mochitest
browser/base/content/test/general/browser_aboutHome.js in non-e10s mode.
In the beginning, browser.xul loads defaultFavicon.png, which creates an
image cache entry.
The next time the test runs, when it loads about:home it tries to load
defaultFavicon.png again, finds an image cache hit (loaded previously by
browser.xul), and calls ValidateRequestWithNewChannel. However, the
asyncOpen2 call fails, so the imgRequestProxy is added to the loadGroup
of about:home and is never notified until the timeout.
A struct used during painting to provide input flags to determine how
imagelib draw calls should behave, plus an output DrawResult to return
information about the result of any imagelib draw calls that may have
occurred.
MozReview-Commit-ID: 3jGEh5vEPF
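A sketch of what such a struct looks like (the name and the default
values here are illustrative, not copied from the patch):

  struct imgDrawingParams {
    explicit imgDrawingParams(uint32_t aFlags = imgIContainer::FLAG_NONE)
      : imageFlags(aFlags)
      , result(mozilla::image::DrawResult::SUCCESS) {}

    const uint32_t imageFlags;          // imgIContainer::FLAG_* inputs
    mozilla::image::DrawResult result;  // accumulated output of draws
  };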
--HG--
extra : rebase_source : 6a40d1bca79425dabf28538b5492cb090074223e
This part mainly marks the channel as urgent-start if the src-related
attributes of HTMLImageElement and HTMLInputElement are set and the
channel is opened due to user interaction. Unfortunately, we cannot just
check the event state right after creating the channel, since some image
loading tasks are queued and executed in a stable state. Thus, I store
the event state in the elements and pass it to the place where we create
the channel (see the sketch below).
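A sketch of the flow (the member and local names are close to, but not
necessarily identical to, what the patch adds):

  // At the point the load is triggered, e.g. when the src attribute is
  // set:
  mUseUrgentStartForChannel = EventStateManager::IsHandlingUserInput();

  // Later, once the channel has actually been created:
  nsCOMPtr<nsIClassOfService> cos = do_QueryInterface(channel);
  if (cos && mUseUrgentStartForChannel) {
    cos->AddClassFlags(nsIClassOfService::UrgentStart);
  }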
MozReview-Commit-ID: GBdAkPfVzsn
--HG--
extra : rebase_source : 715352317b4b600f8a7f78b7bc22b894bb272d27
We invalidate if FrameAnimator::UpdateState marks the composited frame as valid, but we fail to do so if FrameAnimator::RequestRefresh does so.
The other place that sets the composited frame as valid is RasterImage::Decode. This call is actually redundant because the UpdateState call will do the same.
Even though we will invalidate when the decode produces results, we could still draw incorrectly if something else invalidates, in which case we would only draw the part of the image that was invalidated. But that should actually be impossible, as explained in the comment.
We do this to avoid having one of the image loads happen before the cache entry for the image times out and the other happen after it times out.
--HG--
rename : image/test/mochitest/damon.jpg => image/test/mochitest/bug1217571.jpg
This change moves most of the logic for the threadsafety check into
nsAutoOwningThread, rather than having part of the logic live in
nsAutoOwningThread and part of the logic live in nsDebug.h. Changing
this also forces us to clean up a couple of places that replicated the
logic that lived in nsDebug.h.
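A simplified sketch of the consolidated shape (not the exact code; the
real class reports the failure through the usual debug machinery):

  #include "nsDebug.h"   // NS_ASSERTION
  #include "prthread.h"  // PR_GetCurrentThread

  class nsAutoOwningThread {
  public:
    nsAutoOwningThread() : mThread(PR_GetCurrentThread()) {}

    // The whole check now lives here rather than being split between
    // this class and macros in nsDebug.h.
    void AssertOwnership(const char* aMsg) const
    {
      NS_ASSERTION(mThread == PR_GetCurrentThread(), aMsg);
    }

  private:
    void* mThread;
  };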