Extract the common parts of `animated::Color` and `computed::Color` out
into `generics::color::Color<T>` that is generic over the type of
RGBA color.
MozReview-Commit-ID: EymSr7aqnAP
--HG--
extra : rebase_source : fb4afe83fc9ab5167ef5d9ecd55cbbb0d3ea8c1d
GCC doesn't like StyleComplexColor having a constructor in an anonymous
struct inside an anonymous union. Replace the use of a union to access
the `mBorder[..]Color` fields as an array with accessor methods.
MozReview-Commit-ID: 1Wulh1qKYCZ
--HG--
extra : rebase_source : 390b8f852d144a54d9d374bcf3ae70ab6d145d50
Before this patch, nsNSSComponent initialization would call PK11_ConfigurePKCS11
with some localized strings, which contributed to startup time. Also,
PK11_UnconfigurePKCS11 was never called, so the memory allocated to these
strings would stick around forever. This patch addresses both of these problems
by not calling PK11_ConfigurePKCS11. This means that some properties of NSS'
internal "PKCS#11 slots/tokens" have to be localized when displaying them to the
user.
MozReview-Commit-ID: BbAgbgpFfFG
--HG--
extra : rebase_source : b633da8fea683675d0c0514a378954332afeb024
Currently, many tasks fetch content from the Internets. A problem with
that is that fetching from the Internets is unreliable: servers may have
outages or be slow; content may disappear or change out from under us.
The unreliability of 3rd party services poses a risk to Firefox CI.
If services aren't available, we could potentially not run some CI tasks.
In the worst case, we might not be able to release Firefox. That would
be bad. In fact, as I write this, gmplib.org has been unavailable for
~24 hours and Firefox CI is unable to retrieve the GMP source code.
As a result, building GCC toolchains is failing.
A solution to this is to make tasks more hermetic by depending on
fewer network services (which by definition aren't reliable over time
and therefore introduce instability).
This commit attempts to mitigate some external service dependencies
by introducing the *fetch* task kind.
The primary goal of the *fetch* kind is to obtain remote content and
re-expose it as a task artifact. By making external content available
as a cached task artifact, we allow dependent tasks to consume this
content without touching the service originally providing that
content, thus eliminating a run-time dependency and making tasks more
hermetic and reproducible over time.
We introduce a single "fetch-url" "using" flavor to define tasks that
fetch a single URL and then re-expose it as an artifact. Powering
this is a new, minimal "fetch" Docker image that contains a
"fetch-content" Python script that does the work for us.
We have added tasks to fetch source archives used to build the GCC
toolchains.
Fetching remote content and re-exposing it as an artifact is not
very useful by itself: the value is in having tasks use those
artifacts.
We introduce a taskgraph transform that allows tasks to define an
array of "fetches." Each entry corresponds to the name of a "fetch"
task kind. When present, the corresponding "fetch" task is added as a
dependency. And the task ID and artifact path from that "fetch" task
are added to the MOZ_FETCHES environment variable of the task depending
on it. Our "fetch-content" script has a "task-artifacts"
sub-command that tasks can execute to perform retrieval of all
artifacts listed in MOZ_FETCHES.
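To illustrate the consuming side (a sketch only: the exact encoding of
MOZ_FETCHES and the queue URL layout are assumptions here, not the
actual implementation):

    # Hypothetical "task-artifacts" flow: treat MOZ_FETCHES as a JSON
    # array of {"task": <task ID>, "artifact": <artifact path>} entries
    # and download each artifact from the Taskcluster queue.
    import json
    import os
    import urllib.request

    QUEUE = 'https://queue.taskcluster.net/v1'  # assumed endpoint

    def fetch_task_artifacts(dest_dir):
        for entry in json.loads(os.environ['MOZ_FETCHES']):
            url = '%s/task/%s/artifacts/%s' % (
                QUEUE, entry['task'], entry['artifact'])
            dest = os.path.join(dest_dir,
                                os.path.basename(entry['artifact']))
            urllib.request.urlretrieve(url, dest)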
To prove all of this works, the code for fetching dependencies when
building GCC toolchains has been updated to use `fetch-content`. The
now-unused legacy code has been deleted.
This commit improves the reliability and efficiency of GCC toolchain
tasks. Dependencies now all come from task artifacts and should always
be available in the common case. In addition, `fetch-content` downloads
and extracts files concurrently. This makes it faster than the serial
approach we were previously using.
There are some things I don't like about this commit.
First, a new Docker image and Python script for downloading URLs feels
a bit heavyweight. The Docker image is definitely overkill as things
stand. I can eventually justify it because I want to implement support
for fetching and repackaging VCS repositories and for caching Debian
packages. These will require more packages than what I'm comfortable
installing on the base Debian image, therefore justifying a dedicated
image.
The `fetch-content static-url` sub-command could definitely be
implemented as a shell script. But Python is readily available and
is more pleasant to maintain than shell, so I wrote it in Python.
`fetch-content task-artifacts` is more advanced and writing it in
Python is more justified, IMO. FWIW, the script is Python 3 only,
which conveniently gives us access to `concurrent.futures`, which
facilitates concurrent download.
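The concurrency amounts to something like the following (a sketch, not
the actual `fetch-content` code):

    # Download several URLs in parallel with a thread pool; the real
    # script also overlaps archive extraction with the downloads.
    import concurrent.futures
    import urllib.request

    def download(url, dest):
        urllib.request.urlretrieve(url, dest)
        return dest

    def download_all(pairs):
        # pairs: iterable of (url, dest) tuples.
        with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(download, url, dest)
                       for url, dest in pairs]
            for f in concurrent.futures.as_completed(futures):
                print('downloaded %s' % f.result())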
`fetch-content` also duplicates functionality found elsewhere.
generic-worker's task payload supports a "mounts" feature which
facilitates downloading remote content, including from a task
artifact. However, this feature doesn't exist on docker-worker.
So we have to implement downloading inside the task rather than
at the worker level. I concede that if all workers had generic-worker's
"mounts" feature and supported concurrent download, `fetch-content`
wouldn't need to exist.
`fetch-content` also duplicates functionality of
`mach artifact toolchain`. I probably could have used
`mach artifact toolchain` instead of writing
`fetch-content task-artifacts`. However, I didn't want to introduce
the requirement of a VCS checkout. `mach artifact toolchain` has its
origins in providing a feature to the build system. And "fetching
artifacts from tasks" is a more generic feature than that. I think
it should be implemented as a generic feature and not something that is
"toolchain" specific.
I think the best place for a generic "fetch content" feature is in
the worker, where content can be defined in the task payload. But as
explained above, that feature isn't universally available. The next
best place is probably run-task. run-task already performs generic,
very-early task preparation steps, such as performing a VCS checkout.
I would like to fold `fetch-content` into run-task and make it all
driven by environment variables. But run-task is currently Python 2
and achieving concurrency would involve a bit of programming (or
adding package dependencies). I may very well port run-task to Python
3 and then fold fetch-content into it. Or maybe we leave
`fetch-content` as a standalone script.
MozReview-Commit-ID: AGuTcwNcNJR
--HG--
extra : rebase_source : 4918b8c3bac53d63665006802054038bfbca0314
* Add option for second focus point in PinchWithTouchInput
* Integrate content controller in PinchWithTouchInput
* Move PinchWithTouchInput implementation to APZCTesterBase
* Move CreateSingleTouchData to APZTestCommon.h
* Add optional PinchOptions parameter to PinchWithTouchInput
PinchOptions dictates which fingers should be lifted at
the end of the simulated pinch gesture. The default is to
lift BOTH fingers.
--HG--
extra : rebase_source : 9a2c5d1a6a8a6ed97869d3e9d643e2a805f5dfd0
extra : amend_source : b5d2976cc3c5203069a532d228e006365e4da320
Previously, the new wasm sign extension ops were part of the threads
proposal and so our test cases would guard on those ops using
wasmThreadsSupported().
But now that the extension ops have landed independently of the
threads proposal, that guard is incorrect. So just remove the guard;
we don't need it.
--HG--
extra : rebase_source : de854748195f40978837217acbb1ed62fd977ed4
extra : histedit_source : 136233feee89f79ec514701a22fba85ac03d1a5f
Uses of StringHelper.get are replaced with the StringHelper instance in BaseRobocopTest.java and UITestContext.
MozReview-Commit-ID: FfMhdNCX1vC
--HG--
extra : rebase_source : c83977f5b9d2d70953c6a89aa30cd28501b5c994
Refactored StyleComplexColor to support "complex" blending between
background (numeric) color and foreground color (currentColor).
Made explicit the distinction between numeric, currentColor and a
complex blend in Gecko and Stylo.
This is to support SMIL animation, for example, of the form:
<animate from="rgb(10,20,30)" by="currentColor" ... />
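One way to picture such a blend (a sketch assuming a complex color
pairs a numeric color with a foreground ratio; not the actual
Gecko/Stylo representation):

    # Hypothetical model: resolve a complex color by linearly blending
    # each channel between the numeric part and whatever currentColor
    # computes to at the point of use.
    def resolve_complex_color(numeric_rgba, fg_ratio, current_color_rgba):
        return tuple(n * (1 - fg_ratio) + c * fg_ratio
                     for n, c in zip(numeric_rgba, current_color_rgba))

    # fg_ratio == 0 is a pure numeric color, fg_ratio == 1 is pure
    # currentColor, and a SMIL by-animation can land in between.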
MozReview-Commit-ID: IUAK8P07gtm
--HG--
extra : rebase_source : d3648101c6f65479b21e6f02945731cd5bb57663
After Bug 1451183 landed, sEGLLibrary is undefined, and we should use
gl::GLLibraryEGL::Get() instead of sEGLLibrary directly.
MozReview-Commit-ID: DNEkPIEqDpK
--HG--
extra : rebase_source : b07e04ce8a9fe50d72fd857e41c1448fa917d2a9
--enable-release not being passed means developer options are enabled,
which is generally speaking not desirable for builds meant to be
shipped. This is somewhat alleviated for Firefox by MOZILLA_OFFICIAL
implying --enable-release (as well as MOZ_AUTOMATION), but that doesn't
apply to e.g. standalone js builds (even some of the standalone js jobs
on our automation don't set MOZ_AUTOMATION for some reason).
A reasonable thing to do is just to default builds for release/beta
milestones to --enable-release, but still allow --disable-release
to enable the developer options.
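Expressed as a sketch, the intended defaulting logic is (plain Python
for illustration; not the actual moz.configure code):

    # Developer options stay on unless this is a release/beta milestone
    # or --enable-release was passed; --disable-release re-enables them
    # on any milestone.
    def developer_options(milestone_is_release_or_beta, release_arg):
        # release_arg: True for --enable-release, False for
        # --disable-release, None if neither was passed.
        if release_arg is not None:
            return not release_arg
        return not milestone_is_release_or_beta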
--HG--
extra : rebase_source : 770d51b10a9cd17c63972435bb61eed10345ea71
For example, we don't run transform animations on large frames, but once
the frame size is small enough we should run transform animations on the frame.
MozReview-Commit-ID: DkfB9g2Gcdi
--HG--
extra : rebase_source : 93a884c3063d4a6c2626e6695583664b24fe11c8
Moved RustSdpAttributeType and renamed it to SdpAttributeType
Replaced the has_extmap_attribute functions with a generic has_attribute function
Added a sanity check that checks for sendonly and whether simulcast defines receive options
MozReview-Commit-ID: DXAEVu0SRap
--HG--
extra : rebase_source : c5732414f6e199c1e6a85d7d47921ca6ee601550
This patch prevents the devtools from shrinking to a very narrow width in the case of a separate window or the Browser Toolbox.
MozReview-Commit-ID: GjMEOHNZ0O6
--HG--
extra : rebase_source : fa8aa64b7617300181d1be5dda983bc2c114bdbc
When a background page has a messaging listener but is started in
response to something other than an extension message, we were missing
the background "startup" event. Fix that by setting up the listener
earlier.
MozReview-Commit-ID: Cr58EyCoY6W
--HG--
extra : rebase_source : 0d309e01be35e2ef7ab5e489726d0191571ca371