Use the new helper functions instead of calling libvpx directly.
This simplifies adding other codecs in the future.
MozReview-Commit-ID: 8VX0d5S50EE
--HG--
extra : rebase_source : c870b32bac6b924188dd722c052fb88156ad96c8
Encapsulate the code from WebMDemuxer that queries the keyframe flag and frame
resolution inside VPXDecoder, so we have a clean wrapper for all the libvpx
functions we use.
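For illustration, a wrapper of this kind can be built on libvpx's
vpx_codec_peek_stream_info(); the Codec enum and the helper names below
(IsKeyframe, GetFrameSize) are hypothetical sketches, not the actual
VPXDecoder API.

    // Sketch of a thin libvpx wrapper; names are illustrative only.
    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <utility>

    #include "vpx/vpx_decoder.h"
    #include "vpx/vp8dx.h"  // vpx_codec_vp8_dx(), vpx_codec_vp9_dx()

    enum class Codec { VP8, VP9 };

    static vpx_codec_iface_t* IfaceFor(Codec aCodec) {
      return aCodec == Codec::VP8 ? vpx_codec_vp8_dx() : vpx_codec_vp9_dx();
    }

    // Peek at a compressed frame without fully decoding it.
    static bool PeekInfo(Codec aCodec, const uint8_t* aData, size_t aLength,
                         vpx_codec_stream_info_t* aInfo) {
      memset(aInfo, 0, sizeof(*aInfo));
      aInfo->sz = sizeof(*aInfo);
      return vpx_codec_peek_stream_info(IfaceFor(aCodec), aData,
                                        static_cast<unsigned int>(aLength),
                                        aInfo) == VPX_CODEC_OK;
    }

    // True if the compressed frame is a keyframe.
    bool IsKeyframe(Codec aCodec, const uint8_t* aData, size_t aLength) {
      vpx_codec_stream_info_t si;
      return PeekInfo(aCodec, aData, aLength, &si) && si.is_kf;
    }

    // Frame resolution as {width, height}, or {0, 0} on failure.
    std::pair<uint32_t, uint32_t> GetFrameSize(Codec aCodec,
                                               const uint8_t* aData,
                                               size_t aLength) {
      vpx_codec_stream_info_t si;
      if (!PeekInfo(aCodec, aData, aLength, &si)) {
        return {0, 0};
      }
      return {si.w, si.h};
    }

Callers such as the demuxer can then ask IsKeyframe()/GetFrameSize() without
ever including a libvpx header themselves.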
MozReview-Commit-ID: ASRRhNl0A41
--HG--
extra : rebase_source : a1421462f6fc66a2abd965782ec408a8bcf7fe1f
Use the enum we already have here instead of converting
to an int when we pass it around, giving us better
type checking.
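A minimal sketch of the type-checking argument; the names below are
illustrative, not the real Gecko declarations.

    enum class Codec { VP8, VP9 };

    void Decode(Codec aCodec) { /* ... */ }   // accepts only a Codec
    void DecodeRaw(int aCodec) { /* ... */ }  // accepts any int at all

    void Caller() {
      Decode(Codec::VP9);  // OK
      // Decode(1);        // compile error: no implicit int -> Codec conversion
      DecodeRaw(42);       // compiles, even though 42 is not a valid codec
    }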
MozReview-Commit-ID: Gj4xmtQnzw2
--HG--
extra : rebase_source : 95f582e655f1a942dfb68cbba588c44afbb8a38f
This simplifies the comparison and update logic.
MozReview-Commit-ID: A6YII8tlEUn
--HG--
extra : rebase_source : ddf4304298209e515eb44962e8bc9ccd38c9956f
LocaleService serves two main functions. It is a central place for all code in the
engine to learn about locales, but it also performs language negotiation and
selection.
The former is relevant in all processes, but the latter should only be performed
by the "main" process. In current Desktop Firefox, the parent process is the one
performing all the language negotiation, and content processes should operate in
"client" mode.
In Fennec, there's a Java app on top of Gecko which should act as the "server",
and then all Gecko processes, including the parent process, are merely "clients"
of it.
This refactor finalizes that duality, making it easy to configure which mode a
given LocaleService operates in.
The server-client model allows all clients to stay in sync with the server while
operating transparently for all callers, simply returning the right values.
In order to initialize LocaleService in client mode in the child process with
the right locales, I'm adding the list of app locales to XPCOMInitData and then
calling LocaleService::SetAppLocales during child process initialization.
In order to keep the list up to date, I'm adding intl:app-locales-changed to the
list of observed topics; when it fires, I send the updated list to the child
process, which passes the new list to LocaleService::SetAppLocales.
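A rough model of the intended split, using simplified, hypothetical names
(the real LocaleService API differs in detail):

    #include <functional>
    #include <string>
    #include <vector>

    class LocaleServiceSketch {
     public:
      explicit LocaleServiceSketch(bool aIsServer) : mIsServer(aIsServer) {}

      // Server mode only: negotiate the app locales and notify listeners
      // (the moral equivalent of firing "intl:app-locales-changed").
      void NegotiateAppLocales(const std::vector<std::string>& aRequested,
                               const std::vector<std::string>& aAvailable) {
        if (!mIsServer) {
          return;  // clients never negotiate
        }
        mAppLocales.clear();
        for (const auto& req : aRequested) {
          for (const auto& avail : aAvailable) {
            if (req == avail) {
              mAppLocales.push_back(req);
            }
          }
        }
        if (mOnChanged) {
          mOnChanged(mAppLocales);  // push the result to the clients
        }
      }

      // Client mode: accept the already-negotiated list from the server,
      // like LocaleService::SetAppLocales in the child process.
      void SetAppLocales(const std::vector<std::string>& aLocales) {
        mAppLocales = aLocales;
      }

      // Identical in both modes; callers never care who negotiated.
      const std::vector<std::string>& GetAppLocales() const {
        return mAppLocales;
      }

      void SetOnChanged(
          std::function<void(const std::vector<std::string>&)> aCallback) {
        mOnChanged = aCallback;
      }

     private:
      bool mIsServer;
      std::vector<std::string> mAppLocales;
      std::function<void(const std::vector<std::string>&)> mOnChanged;
    };

    int main() {
      LocaleServiceSketch parent(/* aIsServer = */ true);
      LocaleServiceSketch child(/* aIsServer = */ false);
      // Keep the client in sync whenever the server renegotiates.
      parent.SetOnChanged([&](const std::vector<std::string>& aLocales) {
        child.SetAppLocales(aLocales);
      });
      parent.NegotiateAppLocales({"fr", "en-US"}, {"en-US", "de", "fr"});
      // Both parent and child now report {"fr", "en-US"}.
      return 0;
    }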
MozReview-Commit-ID: K9X6berF3IO
--HG--
extra : rebase_source : ca5e502d064023fddfd63fe6fe5eccefce8dee52
Practical changes:
1) Some additional method arguments are nullable or optional, which
matches Chrome/WebKit. They would make more sense non-nullable and
non-optional, but Chrome is wary of the compat impact of changing them.
2) Added [CEReactions] to deleteFromDocument().
MozReview-Commit-ID: Kg9EDubnEui
--HG--
extra : rebase_source : 1d47ee0b12b0b719159c326f789dd6e6b6000c8e
It's not only inefficient, but also prone to bugs, since styles may not be
up to date when it happens.
Post a reconstruct instead, which ensures a style flush happens before running
frame construction.
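Conceptually, this moves from "reconstruct now, possibly with stale style
data" to "queue the reconstruct and flush styles first". The sketch below
uses hypothetical names and is not the actual PresShell code.

    #include <functional>
    #include <queue>

    struct PresShellSketch {
      bool stylesDirty = true;
      std::queue<std::function<void()>> pending;

      void FlushStyles() { stylesDirty = false; }

      // What the old code effectively did: immediate frame construction,
      // possibly observing out-of-date styles.
      void ReconstructFramesNow() { /* may run with stylesDirty == true */ }

      // What the patch does conceptually: post the reconstruct...
      void PostReconstructFrames() {
        pending.push([this] {
          FlushStyles();           // ...so a style flush is guaranteed
          ReconstructFramesNow();  // before frame construction runs.
        });
      }

      void ProcessPendingWork() {
        while (!pending.empty()) {
          pending.front()();
          pending.pop();
        }
      }
    };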
MozReview-Commit-ID: DrakHsJv5fY
Signed-off-by: Emilio Cobos Álvarez <emilio@crisal.io>
--HG--
extra : rebase_source : 11900af908654336cc2391ab9480542c5474e38f
Other browsers do not support any of these (IIRC), telemetry reports
essentially zero usage, and supporting them is contrary to the DOM spec.
Notes on specific events:
CommandEvent and SimpleGestureEvent: These are not supposed to be
web-exposed APIs, so I hid the interfaces from web content too
(necessary to avoid test_all_synthetic_events.html failures).
DataContainerEvent: This was a non-standard substitute for CustomEvent
that seemed to have only one user, so I removed it entirely and switched
the user (MozillaFileLogger.js) to CustomEvent.
ScrollAreaEvent: This is entirely non-standard, but we apparently expose
it deliberately to web content, so I didn't see any reason to remove it
from createEvent.
SimpleGestureEvent and XULCommandEvent: Can still be created from
createEvent(), but not by content.
TimeEvent: This is still in because it has no constructor, so there's no
other way to create it. Ideally we'd update the SMIL spec to add a
constructor. I did remove TimeEvents.
MozReview-Commit-ID: 7Yi2oCl9SM2
--HG--
extra : rebase_source : 199ab921acfc531b8b85e77f90fcd799b03c887b
If a child-to-parent IPCBlob is more than 1MB, we end up using a pipe stream.
If that IPCBlob is sent to a different process, we need to implement asyncWait
correctly: we need to call remoteStream->AsyncWait().
nsInputStreamPump should use the stream as nsIAsyncInputStream if available.
In order to do so, we need to wrap a BufferedStream around it.
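The pattern, in a simplified mock (the real code works with the XPCOM
nsIAsyncInputStream interface; the types below are stand-ins):

    #include <cstddef>
    #include <cstdio>
    #include <functional>
    #include <memory>

    struct InputStream {
      virtual ~InputStream() = default;
      virtual size_t Read(char* aBuf, size_t aCount) = 0;
    };

    struct AsyncInputStream : InputStream {
      // Register a callback invoked once data is ready, instead of blocking.
      virtual void AsyncWait(std::function<void(AsyncInputStream&)> aOnReady) = 0;
    };

    // A pump that prefers the async path whenever the stream supports it.
    void PumpStream(std::shared_ptr<InputStream> aStream) {
      if (auto async = std::dynamic_pointer_cast<AsyncInputStream>(aStream)) {
        async->AsyncWait([](AsyncInputStream& aReady) {
          char buf[4096];
          size_t n = aReady.Read(buf, sizeof(buf));
          std::printf("read %zu bytes asynchronously\n", n);
        });
        return;
      }
      // Fallback: the old synchronous read path.
      char buf[4096];
      aStream->Read(buf, sizeof(buf));
    }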
MediaResource cannot use a simple sync nsIInputStream when BlobURLs are involved
in the content process.
This patch will go away when I finish removing PBlob. Currently, when a PBlob is
sent from child to parent, we use PMemoryStream in order to recreate the
inputStream on the parent side. PMemoryStream sends the data in chunks.
But if PBlob is dealing with an IPCBlobInputStream, it doesn't have access to
the real data. In this case, we must send data using IPCStream. In this way, the
IPCBlobInputStream will send its ID, and the parent will take the real
inputStream from the IPCBlobInputStreamStorage. Note that I check the size
against 1MB instead of 0. No particular reason, but better to avoid using
PMemoryStream for nothing.
Currently the FileReader API uses an nsITransport. This is not needed if the
inputStream is already an nsIAsyncInputStream, and IPCBlobInputStream is always
an nsIAsyncInputStream.
Note that we must create a bufferedStream in order to use ReadSegments in case
the remote inputStream, received by IPCBlobInputStream, is a FileInputStream.
This was not needed with nsITransport.
IPCBlobInputStream must implement the nsIIPCSerializableInputStream interface.
When this is done, the child sends the internal ID of the IPCBlobInputStream to
the parent.
An IPCBlobInputStream can be sent back to the parent process (not implemented
in this patch). When this is done, we basically send only the internal ID.
From this ID, we can retrieve the original inputStream because any
IPCBlobInputStreamParent actor has previously registered it into a singleton:
IPCBlobInputStreamStorage.
So, if we have an IPCBlobInputStreamParent, we have an inputStream, and this
inputStream is known by IPCBlobInputStreamStorage. This will be useful in the
next patches.
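A simplified sketch of that registry idea, with stand-in names rather than
the real IPCBlobInputStreamStorage API:

    #include <cstdint>
    #include <map>
    #include <memory>
    #include <mutex>
    #include <utility>

    struct InputStream { virtual ~InputStream() = default; };

    class StreamStorageSketch {
     public:
      static StreamStorageSketch& Get() {
        static StreamStorageSketch sInstance;  // process-wide singleton
        return sInstance;
      }

      // Called when a parent actor is created: remember the real stream
      // under the ID that will travel across processes.
      void AddStream(uint64_t aID, std::shared_ptr<InputStream> aStream) {
        std::lock_guard<std::mutex> lock(mMutex);
        mStreams[aID] = std::move(aStream);
      }

      // Called when a serialized stream comes back carrying only its ID:
      // recover the original stream without copying any data.
      std::shared_ptr<InputStream> GetStream(uint64_t aID) {
        std::lock_guard<std::mutex> lock(mMutex);
        auto it = mStreams.find(aID);
        return it == mStreams.end() ? nullptr : it->second;
      }

      void RemoveStream(uint64_t aID) {
        std::lock_guard<std::mutex> lock(mMutex);
        mStreams.erase(aID);
      }

     private:
      std::mutex mMutex;
      std::map<uint64_t, std::shared_ptr<InputStream>> mStreams;
    };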
IPCBlobInputStream is a new type of nsIInputStream that is used only in the
content process when a Blob is sent from parent to child. For now, this
inputStream is just cloneable.
When the parent process sends a Blob to a content process, it has the Blob and
its inputStream. With that inputStream it creates an IPCBlobInputStreamParent
actor. This actor keeps the inputStream alive for later use (not part of
this patch).
On the child side we will have, of course, an IPCBlobInputStreamChild actor.
This actor is able to create an IPCBlobInputStream when CreateStream() is
called. This means that one IPCBlobInputStreamChild can manage multiple
IPCBlobInputStreams, one more each time a stream is cloned. When the last of
these streams is released, the child actor sends a __delete__ request to the
parent side; the parent actor will be deleted, and the original inputStream,
on the parent side, will be released as well.
IPCBlobInputStream is a special inputStream because every method except
Available() fails. Basically, this inputStream cannot be used in the content
process for anything other than knowing the size of the original stream.
In the following patches, I'll introduce an async way to use it.
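A simplified model of that behaviour (stand-in names, not the real
implementation):

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <memory>
    #include <utility>

    struct ChildActorSketch {
      void SendDelete() { std::printf("send __delete__ to the parent\n"); }
    };

    class IPCBlobInputStreamSketch {
     public:
      IPCBlobInputStreamSketch(uint64_t aSize,
                               std::shared_ptr<ChildActorSketch> aActor)
          : mSize(aSize), mActor(std::move(aActor)) {}

      // The only operation the content process can perform for now.
      uint64_t Available() const { return mSize; }

      // Every other method fails until the async machinery lands.
      bool Read(char* /* aBuf */, size_t /* aCount */, size_t* aRead) {
        *aRead = 0;
        return false;  // morally NS_ERROR_NOT_IMPLEMENTED
      }

      // Cloning shares the same actor; the actor lives as long as any clone.
      std::shared_ptr<IPCBlobInputStreamSketch> Clone() const {
        return std::make_shared<IPCBlobInputStreamSketch>(mSize, mActor);
      }

      ~IPCBlobInputStreamSketch() {
        // Last stream referencing the actor: tell the parent side to drop
        // the original inputStream.
        if (mActor.use_count() == 1) {
          mActor->SendDelete();
        }
      }

     private:
      uint64_t mSize;
      std::shared_ptr<ChildActorSketch> mActor;
    };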
This is the first use of IPCBlob: ClonedMessageData.
ClonedMessageData is used for BroadcastChannel, MessagePort and any
postMessage() communication. This patch changes StructuredCloneData in order to
use IPCBlob instead of PBlob.
BroadcastChannel has a custom way to manage Blobs because, when the parent
receives them from a content process, it must send them to every other
BroadcastChild actor, duplicating the serialization.