This should be mostly straightforward, since we already have code for this
for image-set() and srcset.
The only thing is that we were using floats for resolution, but since
EXIF allows you to scale each axis separately, we now need to pass an
image::Resolution instead.
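For illustration, here is a minimal sketch of what a per-axis resolution type looks like. The field and method names are hypothetical; the real image::Resolution in the tree may differ.
```
#include <cmath>
#include <cstdint>

// Hypothetical sketch only: the point is that EXIF allows independent
// horizontal and vertical density, which a single float cannot express.
struct Resolution {
  float mX = 1.0f;  // horizontal scale factor
  float mY = 1.0f;  // vertical scale factor

  // Scale an intrinsic size down by the per-axis density.
  void ApplyTo(int32_t& aWidth, int32_t& aHeight) const {
    aWidth = int32_t(std::round(float(aWidth) / mX));
    aHeight = int32_t(std::round(float(aHeight) / mY));
  }
};
```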
The main outstanding issue is the spec comment mentioned in the previous
patch, about what happens if you have srcset/image-set and the image
density specified together. For now I've implemented what the
image-set() spec says, but this is subject to change before shipping of
course.
Differential Revision: https://phabricator.services.mozilla.com/D113265
In some BMP files, the data offset is set to 0; in this case the pixel data immediately follows the color table, so don't fail in such a case.
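A sketch of the fallback logic being described; the parameter names and layout constants are assumptions, not the exact decoder code.
```
#include <cstdint>

// If the BMP header declares a data offset of 0, assume the pixel data
// starts right after the 14-byte file header, the info header and the
// color table, instead of treating the file as corrupt.
uint32_t PixelDataOffset(uint32_t aDeclaredOffset, uint32_t aInfoHeaderSize,
                         uint32_t aColorTableEntries,
                         uint32_t aBytesPerColorEntry) {
  if (aDeclaredOffset != 0) {
    return aDeclaredOffset;  // normal case: trust the header
  }
  return 14 + aInfoHeaderSize + aColorTableEntries * aBytesPerColorEntry;
}
```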
Differential Revision: https://phabricator.services.mozilla.com/D113395
Moving mIsDrawing to SVGDocumentWrapper allows us to avoid depending on
VectorImage in a future patch, and instead only keep a reference to
SVGDocumentWrapper. This helps avoid a circular dependency.
Differential Revision: https://phabricator.services.mozilla.com/D111903
COM is required for calls to SHGetFileInfo for moz-icon, but we currently only
require that for the file content process. We may want to use it in the future
for the privileged about content process, in which case we will have to forgo
win32k lockdown there unless moz-icon is remoted.
Depends on D112960
Differential Revision: https://phabricator.services.mozilla.com/D112961
The assert in question is overly conservative. The dirty rects may be
calculated using a more conservative estimate of the whole frame, and
after the animation has completed its first pass, we may have determined
that the rect is larger than necessary and shrunk it accordingly. There may
still be frames lingering with the old, larger rect, however, and those can
falsely trip this assert.
Differential Revision: https://phabricator.services.mozilla.com/D113155
This removes the code in Gecko that was only allowing opaque
compositor surfaces, now that the underlying work in WR and
SWGL to support these is complete.
Differential Revision: https://phabricator.services.mozilla.com/D112831
This makes us match Blink and WebKit on how to render an iframe whose src
attribute is an image URL. They seem to have always used 0 margin in this
case, and this seems preferable from an author's perspective (since the
standard HTML-body margin feels kind of arbitrary, when viewing an image).
Note that this change does also mean that images will be slightly closer to the
upper-left of the page, if they're viewed directly and then printed. This
shouldn't cause them to be clipped or cause other issues; they'll still be
offset from the page edge by the printed-page margins, as well as any
unprintable areas that we get from the printer.
Differential Revision: https://phabricator.services.mozilla.com/D110294
After applying D102448,
uriloader/exthandler/tests/mochitest/test_nullCharFile.xhtml starts to fail.
The reason is that it adds the image sniffer to net-content-sniffers, which is
not expected.
Therefore, this patch
- adds two other sniffer categories:
  - orb-content-sniffers
    - the sniffers that are needed by ORB.
  - net-and-orb-content-sniffers
    - the sniffers that are in either orb-content-sniffers or net-content-sniffers.
- changes the way we ensure that we only use the sniffers in the
  requested category.
Differential Revision: https://phabricator.services.mozilla.com/D107207
- Add missing include directives and forward declarations.
- Remove some extra include directives.
- Add missing namespace qualifications.
- Move include directives out of namespace in toolkit/xre/GlobalSemaphore.h
Differential Revision: https://phabricator.services.mozilla.com/D98894
If GetAnimated fails when we're getting a static request, because e.g.
the image is not yet fully loaded, we still create a frozen image, but
in that case we can do better and avoid the WR fallback.
Depends on D108197
Differential Revision: https://phabricator.services.mozilla.com/D108198
Note that this patch only transforms the use of the nsDataHashtable type alias
to a directly equivalent use of nsTHashMap. It does not change the specification
of the hash key type to make use of the key class deduction that nsTHashMap
allows for in some cases. That can be done in a separate step, but requires more
attention.
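A minimal before/after illustration of the mechanical transformation; the key and value types here are just an example.
```
// Before: the nsDataHashtable type alias.
nsDataHashtable<nsCStringHashKey, uint32_t> mCounts;

// After: the directly equivalent nsTHashMap spelling. The key class is still
// written out explicitly; key class deduction is left to a later step.
nsTHashMap<nsCStringHashKey, uint32_t> mCounts;
```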
Differential Revision: https://phabricator.services.mozilla.com/D106008
Well, mostly thread-safe, in the sense that on shutdown we might free
them, but that is pre-existing and can't happen for the code-path that
I'm about to touch.
We could probably just avoid freeing these transforms if we wanted...
Differential Revision: https://phabricator.services.mozilla.com/D104946
This makes the naming more consistent with other functions called
Insert and/or Update. It also removes the ambiguity about whether
Put expects that an entry already exists or not, in particular because
it differed from nsTHashtable::PutEntry in that regard.
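A small usage sketch of the renamed method; the key and value types are chosen only for illustration.
```
nsTHashMap<nsUint32HashKey, nsCString> map;
// Inserts the entry if the key is absent, or overwrites the existing value if
// it is present; the name no longer suggests anything about pre-existence.
map.InsertOrUpdate(42, "value"_ns);
```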
Differential Revision: https://phabricator.services.mozilla.com/D105473
There are no code changes, only #include changes.
It was a fairly mechanical process: Search for all "AUTO_PROFILER_LABEL", and in each file, if only labels are used, convert "GeckoProfiler.h" into "ProfilerLabels.h" (or just add that last one where needed).
In some files, there were also some marker calls but no other profiler-related calls, in these cases "GeckoProfiler.h" was replaced with both "ProfilerLabels.h" and "ProfilerMarkers.h", which still helps in reducing the use of the all-encompassing "GeckoProfiler.h".
Differential Revision: https://phabricator.services.mozilla.com/D104588
CLOSED TREE
Backed out changeset f6519420f910 (bug 1678487)
Backed out changeset 9beae015d19b (bug 1678487)
Backed out changeset 029cc10d2477 (bug 1678487)
Well, mostly thread-safe, in the sense that on shutdown we might free
them, but that is pre-existing and can't happen for the code-path that I'm
about to touch.
We could probably just avoid freeing these transforms if we wanted...
Differential Revision: https://phabricator.services.mozilla.com/D104946
This new approach to weak references is roughly modeled after the approach used
by Rust's Arc<T>, and uses an atomic compare-and-swap loop to perform weak to
strong reference upgrades. This approach ends up moving the strong reference
count out of the tracked object and into the weak reference object, as the
strong reference count atomic needs to outlive the object itself.
Rust's Arc Weak::upgrade implementation:
d98d2f57d9/library/alloc/src/sync.rs (L1806-L1837)
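A minimal sketch of that compare-and-swap upgrade loop; this is simplified and is not the actual Gecko or Rust implementation.
```
#include <atomic>
#include <cstddef>

// Sketch only: upgrade a weak reference to a strong one without ever
// resurrecting an object whose strong count has already hit zero. The
// strong count lives in the weak-reference bookkeeping, outside the
// tracked object, so it can outlive the object itself.
bool TryUpgrade(std::atomic<size_t>& aStrongCount) {
  size_t count = aStrongCount.load(std::memory_order_relaxed);
  while (count != 0) {
    // Only bump the count if it is still non-zero; retry on contention.
    if (aStrongCount.compare_exchange_weak(count, count + 1,
                                           std::memory_order_acquire,
                                           std::memory_order_relaxed)) {
      return true;  // we now hold a strong reference
    }
    // count was reloaded by compare_exchange_weak; the loop re-checks for zero.
  }
  return false;  // the object is already gone; the upgrade fails
}
```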
Differential Revision: https://phabricator.services.mozilla.com/D102245
The only way this can happen is if the ShouldTreatAsCompleteDueToSyncDecode
check returns true even though the image is errored.
https://hg.mozilla.org/integration/autoland/rev/645a4d6461ca was
supposed to deal with this, but my guess is that there is a slight race
condition in which the error status isn't there at the beginning, but is
there after the StartDecoding call.
Returning BAD_IMAGE when we hit a broken image, rather than painting
transparent, seems better than what we do now, and should fix the
intermittent issue.
Differential Revision: https://phabricator.services.mozilla.com/D101361
From what I can see in source history, pal8v4.bmp started out with a fuzzy of (3, 6376) on all platforms. Then one day aarch64-windows started producing a result of (1, 899) and so that platform's expectations were adjusted.
In the upcoming clang 12, the behavior of this test gets "fixed" in that we go back to those 6376 differing pixels like on other platforms. The max difference rises to 4 though. In light of the fact that aarch64-windows is seeing less priority these days, I can't justify digging into the exact code reason for the change, so this patch just updates the fuzzy setting to allow these values.
Differential Revision: https://phabricator.services.mozilla.com/D100823
This patch implements transparency support for AVIF image files.
To convert the decoded YCbCr and Alpha data to RGBA, a function named
`ConvertYCbCrAToARGB` is created, which works as follows:
ConvertYCbCrAToARGB:
  If the layout of the YCbCr data is I420
    Call libyuv::I420AlphaToARGB
  Else
    Fill the ARGB buffer with the RGB data converted by ConvertYCbCrToRGB first
    Insert the alpha data into the ARGB buffer
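A rough sketch of that dispatch is below. Aside from libyuv::I420AlphaToARGB, the struct fields and the helper functions are hypothetical names standing in for the existing conversion code.
```
#include <cstdint>

#include "libyuv.h"  // for libyuv::I420AlphaToARGB

// Hypothetical plane description; the real Gecko code uses its own
// YCbCr/alpha data structures.
struct PlanarAlphaData {
  const uint8_t *mYChannel, *mCbChannel, *mCrChannel, *mAlphaChannel;
  int mYStride, mCbCrStride, mAlphaStride;
  int mWidth, mHeight;
  bool mIsI420;
};

// Hypothetical helpers standing in for the existing conversion code.
void ConvertYCbCrToRGB(const PlanarAlphaData&, uint8_t* aOutput, int aStride);
void FillAlphaChannel(const PlanarAlphaData&, uint8_t* aOutput, int aStride);

void ConvertYCbCrAToARGB(const PlanarAlphaData& aData, uint8_t* aOutput,
                         int aOutputStride) {
  if (aData.mIsI420) {
    // Fast path: libyuv converts the 4:2:0 planes plus alpha in one call.
    libyuv::I420AlphaToARGB(aData.mYChannel, aData.mYStride,
                            aData.mCbChannel, aData.mCbCrStride,
                            aData.mCrChannel, aData.mCbCrStride,
                            aData.mAlphaChannel, aData.mAlphaStride,
                            aOutput, aOutputStride,
                            aData.mWidth, aData.mHeight,
                            /* attenuate */ 0);
  } else {
    // Generic path: convert the color planes first, then fill in the alpha.
    ConvertYCbCrToRGB(aData, aOutput, aOutputStride);
    FillAlphaChannel(aData, aOutput, aOutputStride);
  }
}
```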
In addition, this patch refactors nsAVIFDecoder a bit to make the
lifetimes of the parsed data and the decoded image data clearer: they
won't outlive the Parser object and the {Dav1d, AOM}Decoder object,
respectively. This should improve code readability.
This patch also adds a transparent image test (TransparentAVIFTestCase)
to check whether FLAG_HAS_TRANSPARENCY is posted. The test image file
`transparent.avif` is from Bug 1654462.
Differential Revision: https://phabricator.services.mozilla.com/D98951
Import the improvements made in the mp4parse-rust repo. The changes
save some redundant copies when calling the avif-related APIs and provide
the ability to get the alpha data of the parsed avif image.
Differential Revision: https://phabricator.services.mozilla.com/D98950
I was running into issues where these names would conflict with the type's own Get/Set methods,
and these names have the added benefit of making it a bit clearer that atomic operations are involved.
Differential Revision: https://phabricator.services.mozilla.com/D99268
When loading a progressive JPEG with non-interleaved DC scans,
we add a check to see if we have already seen data from all DC
channels and only start rendering when this is the case.
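A sketch of that gating check, with hypothetical names; the real decoder consults the per-scan component information from the JPEG library.
```
#include <cstddef>
#include <vector>

// Hypothetical sketch: remember which components have delivered DC data and
// hold off rendering until all of them have.
class DCScanTracker {
 public:
  explicit DCScanTracker(size_t aNumComponents)
      : mSeenDC(aNumComponents, false) {}

  // Called when a DC scan for a single (non-interleaved) component arrives.
  void MarkDCSeen(size_t aComponentIndex) { mSeenDC[aComponentIndex] = true; }

  // Only start producing output once every component's DC data is present.
  bool CanStartRendering() const {
    for (bool seen : mSeenDC) {
      if (!seen) {
        return false;
      }
    }
    return true;
  }

 private:
  std::vector<bool> mSeenDC;
};
```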
Differential Revision: https://phabricator.services.mozilla.com/D95683
In https://hg.mozilla.org/integration/autoland/rev/471ad96ddc3e we changed this from calling |Surface().IsFullyDecoded()| to calling Seek, because IsFullyDecoded always returns false for large enough animations (we don't keep around all of their frames).
In the old model, where mIsCurrentlyDecoded tracked whether the image was fully decoded, we didn't want to set mCompositedFrameInvalid simply because the image wasn't fully decoded, since the current frame could still be available.
In the new model, where mIsCurrentlyDecoded tracks whether the current frame is available, we definitely do want to mark mCompositedFrameInvalid if the current frame is not available.
Depends on D96944
Differential Revision: https://phabricator.services.mozilla.com/D96945
In https://hg.mozilla.org/integration/autoland/rev/471ad96ddc3e we changed this from calling |Surface().IsFullyDecoded()| to calling Seek, because IsFullyDecoded always returns false for large enough animations (we don't keep around all of their frames).
However, RequestRefresh/AdvanceFrame call GetFrame. Seek is intended for display purposes, while GetFrame is intended for frame-advancing purposes, and the two can actually return different results when it is the first frame that we are asking for.
Differential Revision: https://phabricator.services.mozilla.com/D96944
Add a telemetry probe tracking the errors in the dav1d and aom decoders. We
only care about the errors from `aom_codec_decode` and `dav1d_get_picture`, so
other error codes won't be counted.
Differential Revision: https://phabricator.services.mozilla.com/D94489
Allow-list all Python code in tree for use with the black linter, and re-format all code in-tree accordingly.
To produce this patch I did all of the following:
1. Make changes to tools/lint/black.yml to remove include: stanza and update list of source extensions.
2. Run ./mach lint --linter black --fix
3. Make some ad-hoc manual updates to python/mozbuild/mozbuild/test/configure/test_configure.py -- it has some hard-coded line numbers that the reformat breaks.
4. Make some ad-hoc manual updates to `testing/marionette/client/setup.py`, `testing/marionette/harness/setup.py`, and `testing/firefox-ui/harness/setup.py`, which have hard-coded regexes that break after the reformat.
5. Add a set of exclusions to black.yml. These will be deleted in a follow-up bug (1672023).
# ignore-this-changeset
Differential Revision: https://phabricator.services.mozilla.com/D94045
sizemode/displaymode media queries only affect a given browsing context
tree so there's no need to propagate the change to images in that case.
Differential Revision: https://phabricator.services.mozilla.com/D94422
Allow-list all Python code in tree for use with the black linter, and re-format all code in-tree accordingly.
To produce this patch I did all of the following:
1. Make changes to tools/lint/black.yml to remove include: stanza and update list of source extensions.
2. Run ./mach lint --linter black --fix
3. Make some ad-hoc manual updates to python/mozbuild/mozbuild/test/configure/test_configure.py -- it has some hard-coded line numbers that the reformat breaks.
4. Add a set of exclusions to black.yml. These will be deleted in a follow-up bug (1672023).
# ignore-this-changeset
Differential Revision: https://phabricator.services.mozilla.com/D94045
This changes the way we deal with page use counters so that we can
handle out of process iframes.
Currently, when a parent document is being destroyed, we poke into all
of the sub-documents to merge their use counters into the parent's page
use counters, which we then report via Telemetry. With Fission enabled,
the sub-documents may be out of process. We can't simply turn these
into async IPC calls, since the parent document will be destroyed
shortly, as might the content processes holding the sub-documents.
So instead, each document during its initialization identifies which
ancestor document it will contribute its page use counters to, and
stores its WindowContext id to identify that ancestor. A message is
sent to the parent process to notify it that page use counter data will
be sent at some later point. That later point is when the document
loses its window. It doesn't matter if the ancestor document has
already been destroyed at this point, since all we need is its
WindowContext id to uniquely identify it. Once the parent process has
received all of the use counters it expects to accumulate to a given
WindowContext Id, it reports them via Telemetry.
Reporting of document use counters remains unchanged and is done by each
document in their content process.
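A rough sketch of the parent-side bookkeeping described above; all names here are hypothetical and stand in for the real use counter and telemetry code.
```
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical sketch: the parent process keeps, per top-level page
// (identified by its WindowContext id), how many contributions it still
// expects and the counters accumulated so far. Once the last expected
// contribution arrives, the totals are reported and the entry is dropped.
struct PendingPageCounters {
  uint32_t mExpectedContributions = 0;
  std::vector<uint32_t> mCounters;  // one slot per use counter
};

class PageUseCounterAccumulator {
 public:
  // Called when a document announces it will contribute to this page.
  void ExpectContribution(uint64_t aWindowContextId) {
    mPending[aWindowContextId].mExpectedContributions++;
  }

  // Called when a document loses its window and sends its counters.
  void Accumulate(uint64_t aWindowContextId,
                  const std::vector<uint32_t>& aCounters) {
    PendingPageCounters& entry = mPending[aWindowContextId];
    if (entry.mCounters.size() < aCounters.size()) {
      entry.mCounters.resize(aCounters.size(), 0);
    }
    for (size_t i = 0; i < aCounters.size(); ++i) {
      entry.mCounters[i] += aCounters[i];
    }
    if (--entry.mExpectedContributions == 0) {
      ReportViaTelemetry(entry.mCounters);  // hypothetical reporting hook
      mPending.erase(aWindowContextId);
    }
  }

 private:
  void ReportViaTelemetry(const std::vector<uint32_t>&) { /* stub */ }
  std::unordered_map<uint64_t, PendingPageCounters> mPending;
};
```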
While we're here, we also:
* Limit use counters to be reported for a pre-defined set of document
URL schemes, rather than be based on the document principal.
* Add proper MOZ_LOG logging for use counters instead of printfs.
Differential Revision: https://phabricator.services.mozilla.com/D87188
We use it to merge blobs, but that doesn't match the spec. Per:
https://html.spec.whatwg.org/#the-list-of-available-images
We should key off the URI, and thus a revoked image should be able to be
reused if the request is cached. This works to allow printing revoked
blob images, which other browsers allow.
I don't see any way to make that work easily while preserving the blob
image optimization. That being said, it seems other browsers also
re-decode when creating different URLs for the same blob (see the
test-case attached to the bug), so it seems we should be able to live
without it.
Differential Revision: https://phabricator.services.mozilla.com/D90544
This patch was generated by running:
```
perl -p -i \
-e 's/^(\s+)([a-zA-Z0-9.]+) = NS_ConvertUTF8toUTF16\((.*)\);/\1CopyUTF8toUTF16(\3, \2);/;' \
-e 's/^(\s+)([a-zA-Z0-9.]+) = NS_ConvertUTF16toUTF8\((.*)\);/\1CopyUTF16toUTF8(\3, \2);/;' \
$FILE
```
against every .cpp and .h in mozilla-central, and then fixing up the
inevitable errors that happen as a result of matching C++ expressions with
regexes. The errors fell into three categories:
1. Calling the convert functions with `std::string::c_str()`; these were
changed to simply pass the string instead, relying on implicit conversion
to `mozilla::Span`.
2. Calling the convert functions with raw pointers, which is not permitted
with the copy functions; these were changed to invoke `MakeStringSpan` first.
3. Other miscellaneous errors resulting from over-eager regexes and/or the
replacement not being type-aware. These changes were reverted.
Differential Revision: https://phabricator.services.mozilla.com/D88903
The abstract observer base classes are moved to a separate header file
nsRefreshObservers.h and the includes are adjusted accordingly.
Some method implementations are moved to the corresponding implementation files
to avoid the need to include the nsRefreshDriver.h file in the header.
Differential Revision: https://phabricator.services.mozilla.com/D85764