mServiceChild is a UniquePtr, so nulling it will destroy the GMPServiceChild,
which will destroy the associated message channel. So we need to close the
channel before it is destroyed. (Just as is correctly done in
Observe() above.)
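For reference, the shutdown path now does roughly the following (a sketch only;
the member name follows the existing GMPServiceChild code, and Close() is the
usual top-level IPDL actor shutdown call):

    // Close the channel before the UniquePtr tears down the actor.
    if (mServiceChild) {
      mServiceChild->Close();
      mServiceChild = nullptr;  // Destroys GMPServiceChild and its channel.
    }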
MozReview-Commit-ID: INuHN2Is7bC
--HG--
extra : rebase_source : 2a927bb06dd8fb4f1114dc0b64025cbdddc7c133
Makes transfer of samples between the content and CDM processes use shmems.
The Chromium CDM API requires us to implement a synchronous interface to supply
buffers to the CDM for it to write decrypted samples into. We want our buffers
to be backed by shmems, in order to reduce the overhead of transferring decoded
frames. However, due to sandboxing restrictions, the CDM process cannot allocate
shmems itself. We don't want to do synchronous IPC to request shmems
from the content process, nor do we want to have to do intr (interrupt) IPC or
make async IPC conform to the sync allocation interface. So instead we have the content
process pre-allocate a set of shmems and give them to the CDM process in
advance of them being needed.
When the CDM needs to allocate a buffer for storing a decrypted sample, the CDM
host gives it one of these shmems' buffers. When this is sent back to the
content process, we copy the result out (uploading to a GPU surface for video
frames), and send the shmem back to the CDM process so it can reuse it.
We predict the size of the buffers the CDM will allocate, and prepopulate the CDM's
list of shmems with shmems of at least that size, plus a bit of padding for
safety. We pad frames out to be the next multiple of 16, as we've seen some
decoders do that.
Normally the CDM won't allocate more than one buffer at once, but we've seen
cases where it allocates two buffers, returns one and holds onto the other. So
the minimum number of shmems we give to the CDM must be at least two, and the
default is three for safety.
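In outline, the content-process side of this scheme looks something like the
sketch below. The helper and message names (kShmemSlack, kMinShmemCount,
AllocateShmem, SendGiveBuffer, UploadToGPUSurface) are illustrative, not the
actual patch:

    // Pad the predicted frame size to the next multiple of 16 and add a
    // little slack, since some decoders round their allocations up.
    size_t PaddedBufferSize(size_t aPredicted) {
      return ((aPredicted + 15) / 16) * 16 + kShmemSlack;
    }

    // Pre-allocate a small pool of shmems and hand them to the CDM
    // process, which can't allocate shmems itself due to sandboxing.
    // kMinShmemCount defaults to 3.
    void PrepopulateShmems(size_t aBufferSize) {
      for (size_t i = 0; i < kMinShmemCount; i++) {
        ipc::Shmem shmem;
        if (AllocateShmem(aBufferSize, &shmem)) {  // illustrative helper
          SendGiveBuffer(std::move(shmem));
        }
      }
    }

    // When a decrypted frame comes back, copy the data out (uploading
    // video frames to a GPU surface), then return the shmem for reuse.
    void OnDecryptedFrame(ipc::Shmem&& aShmem) {
      UploadToGPUSurface(aShmem.get<uint8_t>(), aShmem.Size<uint8_t>());
      SendGiveBuffer(std::move(aShmem));
    }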
MozReview-Commit-ID: 5FaWAst3aeh
--HG--
extra : rebase_source : a0cb126e72bfb2905bcdf02e864dc654e8340410
This means we can pass anything that converts implicitly to a Span to
PostResult, including an nsTArray<uint8_t>. We can also pass a Span
that contains the contents of a Shmem's buffer.
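For example, with a hypothetical PostResult signature (shown only to
illustrate the call sites):

    // Callee takes a read-only view of the bytes.
    void PostResult(uint32_t aPromiseId, mozilla::Span<const uint8_t> aData);

    // Caller with an nsTArray; it converts implicitly to a Span.
    nsTArray<uint8_t> bytes;
    PostResult(promiseId, bytes);

    // Caller with a Shmem's buffer, wrapped in a Span without copying.
    PostResult(promiseId,
               mozilla::Span<const uint8_t>(shmem.get<uint8_t>(),
                                            shmem.Size<uint8_t>()));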
MozReview-Commit-ID: 8AAcRmVCEVy
--HG--
extra : rebase_source : 44dfbc465db14bb689a653e6c0b3cbc626c0a0d1
This ensures that the IPC connection from the content process to the main
process is shut down as soon as possible. Once all the IPC connections are
closed, the main process removes its async shutdown blocker, and Firefox
can shut down.
MozReview-Commit-ID: 8rqa384ayd9
--HG--
extra : rebase_source : b9cbbb9f4c22016284a8d49cddaea0d96666acf9
This ensures that once shutdown has started we don't try to start up new
GMPs. Doing so would create more connections from the content process to the
main process, and the main process can't shut down until all such connections
are shut down.
MozReview-Commit-ID: KE8nCoLXjdd
--HG--
extra : rebase_source : 674f3c4ddcb5bb93dd775a861b425d25510871e9
This will allow us to broadcast a notification to the GMPServices running in
the content processes when they need to shut down.
MozReview-Commit-ID: FviFDgNMnUV
--HG--
extra : rebase_source : f4ad3c6df0e14c88db1199fbe6281d67f98590ae
When we shut down the browser while the GMPService is active we can end up
leaking a GMPParent, GeckoMediaPluginServiceParent, and a Runnable. I tracked
this down to the runnable dispatched to the GMP thread in
GMPParent::ChildTerminated(). The dispatch of this runnable is failing as we
are dispatching the runnable to a reference of the GMP thread which we have
previously acquired, but that thread has now shut down. So the dispatch fails,
and if you look in nsThread::DispatchInternal() you'll see that we deliberately
leak the runnable if dispatch fails! The runnable leaking means that the
references it holds to the GMPParent and the GMP service parent leak.
The solution in this patch is to not cache a reference to the GMP thread on the
GMPParent; instead we re-request the GMP thread from the GMPService when we
want it. This means that in the case where the browser is shutting down,
GMPParent::GMPThread() will return null, and we'll not leak the runnable. We'll
then follow the (hacky) shutdown path added in bug 1163239.
We also need to change GMPParent::GMPThread() and GMPContentParent::GMPThread()
to return a reference to the GMP thread with a refcount held on it, in order
to ensure we don't race with the GMP service shutting down the GMP thread
while we're trying to dispatch to it on shutdown.
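The shape of the fix is roughly as follows (a sketch; the real change is on
GMPParent and GMPContentParent, and the caller code is illustrative):

    // Re-request the GMP thread each time, with a reference held, instead
    // of caching it at construction time.
    already_AddRefed<nsIThread> GMPParent::GMPThread() {
      RefPtr<GeckoMediaPluginService> service =
        GeckoMediaPluginService::GetGeckoMediaPluginService();
      if (!service) {
        return nullptr;  // Shutting down.
      }
      nsCOMPtr<nsIThread> thread;
      service->GetThread(getter_AddRefs(thread));  // Null once shut down.
      return thread.forget();
    }

    // Callers null check before dispatching, so a failed dispatch (and the
    // deliberate leak in nsThread::DispatchInternal()) can't happen here.
    nsCOMPtr<nsIThread> thread = GMPThread();
    if (thread) {
      thread->Dispatch(runnable, NS_DISPATCH_NORMAL);
    }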
MozReview-Commit-ID: CXv9VZqTRzY
--HG--
extra : rebase_source : e507e48ee633cad8911287fb7296bbb1679a7bcb
This is required for the browser clearing persistence tests to pass.
MozReview-Commit-ID: Ai9qc6Ds1IG
--HG--
extra : rebase_source : 80c2133e26742410fda983e3c18c35736fc013d0
This severs the ChromiumCDMVideoDecoder's connection with the CDM. The CDM process
will shut down when the MediaKeys also severs its connection.
MozReview-Commit-ID: Aqc4y5Nxjvc
--HG--
extra : rebase_source : 5a2f77ffe84f9b99b4668520c838b29a428578d3
At this stage, I store video frames in memory in nsTArrays rather than in
shmems just so we can get this working. Once this is working, I'll follow up
with patches to switch to storing all large buffer traffic between the CDM and
other processes in shmems.
I'm not planning on preffing this new CDM path on until that's in place.
MozReview-Commit-ID: LSTb42msWQS
--HG--
extra : rebase_source : b7f162515a1a32b2c344c11d0fa5c7004cec2e15
The MediaKeys accesses the ChromiumCDMProxy on the main thread. But the
ChromiumCDMVideoDecoder will need to access the ChromiumCDMProxy on the decode
task queue in order to get a reference to the ChromiumCDMParent so that it can
talk to the CDM (on the GMP thread).
Additionally, we'll need to shut down the ChromiumCDMProxy, and if we do that
on the main thread while the ChromiumCDMVideoDecoder is trying to get the
ChromiumCDMParent reference, we could hit thread safety issues.
So we need to hold a lock while reading or writing the ChromiumCDMProxy's
reference to the ChromiumCDMParent. To that end, add a GetCDMParent() function
to the ChromiumCDMProxy which takes the lock while accessing the reference.
This means that the caller will always get a valid reference. There is no guarantee
that the ChromiumCDMParent isn't shut down after the reference is taken; if that
happens, the ChromiumCDMParent returned will fail on all operations.
In a later patch in this series, the ChromiumCDMProxy will null out its reference
to the ChromiumCDMParent on shutdown, and cause GetCDMParent to return null.
So callers need to null check the return value of GetCDMParent.
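A minimal sketch of the accessor and a caller (mCDMMutex, mCDM, and mProxy are
illustrative names, not necessarily the actual members):

    RefPtr<gmp::ChromiumCDMParent> ChromiumCDMProxy::GetCDMParent() {
      MutexAutoLock lock(mCDMMutex);
      return mCDM;  // Null once Shutdown() has cleared it (later patch).
    }

    // Caller, e.g. on the decode task queue:
    if (RefPtr<gmp::ChromiumCDMParent> cdm = mProxy->GetCDMParent()) {
      // Safe to talk to the CDM via cdm; operations simply fail if the
      // ChromiumCDMParent has since been shut down.
    }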
MozReview-Commit-ID: 4xL41YbwkxL
--HG--
extra : rebase_source : aa854e9d88965d7da60231d6f6a3912bf6ad2eeb
This means the EME PDM implementation can safely tell when a CDMProxy is a
ChromiumCDMProxy, so we can create an appropriate MediaDataDecoder for it (in
the next patch).
MozReview-Commit-ID: CpL6QRa7SwJ
--HG--
extra : rebase_source : 3821c378c73067066f3cc67499680bdf546fb4f0
This ensures that when we're using the ChromiumAdapter we actually ask it
whether it'll work, rather than asking the adapter we're not using.
MozReview-Commit-ID: 85nZPl9MdWa
--HG--
extra : rebase_source : 90de89bec9b004859c3c2c09ed8efbd255acc141
We still use the same EMEDecryptor MediaDataDecoder as is used by the existing
EME decrypting path.
MozReview-Commit-ID: 3pXPjChctLb
--HG--
extra : rebase_source : 67575a02290ddb871510dd88f59fdab77658b3ce
This means the MediaKeys is able to create a CDM.
MozReview-Commit-ID: 94Xc7sCLhH3
--HG--
extra : rebase_source : 914db1f04e0770776ae25c7b8bdc59e729fe78d0
This will eventually replace GMPCDMProxy. Methods will be implemented in later
patches.
MozReview-Commit-ID: 86pwo81tFZv
--HG--
extra : rebase_source : df41a20a0fefaf26a63ed18f1ccdf7fa5a3a1e89
We currently use an adapter object to adapt plugins that don't conform to the
GMP interface to the GMP interface.
We use the WidevineAdapter to talk to the CDM from the two GMP IPDL protocols.
We will be using a single protocol to talk to the Chromium CDM, so we need a
new adapter which handles that.
MozReview-Commit-ID: F7hnZ9oo9mJ
--HG--
rename : dom/media/gmp/widevine-adapter/WidevineAdapter.cpp => dom/media/gmp/ChromiumCDMAdapter.cpp
rename : dom/media/gmp/widevine-adapter/WidevineAdapter.h => dom/media/gmp/ChromiumCDMAdapter.h
extra : rebase_source : 7c08edea3c11d41eb3ecfa9c7a8ef65cf3b8ddb0
Infrastructure necessary to create an instance of the CDM process.
MozReview-Commit-ID: 7oQ86x6BNWj
--HG--
extra : rebase_source : c725a958c507b7f93ce9cfccc475f259ae9ccbc2
We currently do two sync IPCs to launch a GMP; one from content to main process
to get the nodeId and a second to get a GMPContentParent for that nodeId.
We use the nodeIds to ensure that the GMPVideoDecoder and GMPDecryptor actors
correspond to the same CDM instance/process. However, once we switch to having
one protocol that encompasses both decryption and decoding, we don't need to
worry about making sure our decoder and decryptor actors match up, as we only
have one underlying connection to the CDM instance.
So we can merge the get nodeId and get GMPContentParent operations into a
single operation that does both. To do this, we just need to pass the
parameters used to calculate the nodeId in the LaunchGMP message.
Once we've switched EME over to using the CDM via a single actor, we can remove
the nodeId nsCString from our media code and from GMPVideoDecoder and
GMPVideoEncoder.
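Conceptually, the content process goes from two synchronous round trips to one.
The message and parameter names below are hypothetical, shown only to
illustrate the change, not the actual IPDL:

    // Before: two sync IPCs to the main process.
    nsCString nodeId;
    SendGetGMPNodeId(origin, topLevelOrigin, gmpName, &nodeId);
    SendLaunchGMPForNodeId(nodeId, api, tags, &result);

    // After: one sync IPC; the node ID parameters ride along in
    // LaunchGMP and the main process computes the node ID itself.
    SendLaunchGMP(origin, topLevelOrigin, gmpName, api, tags, &result);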
MozReview-Commit-ID: 7GXlJ37fOTZ
--HG--
extra : rebase_source : cf20a165048f777f34dab01fce984018ad641b85
The implementations of this protocol will be stubbed out in later patches.
MozReview-Commit-ID: 622CB1BOoR9
--HG--
extra : rebase_source : b796bfb4c0d0d2872787043e3b9fc83a0e6b09ea
The clock that GMP currently exposes to CDMs has second precision, whereas the
clock that Chromium exposes to CDMs has microsecond precision. We should use
the same clock as Chromium does (since we have its code in our tree already) so
that our CDM harness is as compatible with Chromium as possible.
MozReview-Commit-ID: FssZZFg4vhn
--HG--
extra : rebase_source : 8fab078ba0ecf351a9a8147d3f7434d40a2e0a25
This means we can have GMPVideoDecoder's AVCC -> AnnexB conversion done by the H264Converter, and
simplify the code in WidevineVideoDecoder.
MozReview-Commit-ID: 3HT5VXth6LL
--HG--
extra : rebase_source : b840489edafa5dc981ba44f722d92083a40e34cd
The work I did in bug 1306314 seems to have either regressed or never worked
properly for multiple same-origin CDM processes. I'd guess the decryptor IPDL
protocol actor ID must not be as unique as I thought. So if we just use a
counter managed by the GMPDecryptorChild, we'll get a per-CDM unique ID, which
is sufficient.
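A sketch of the counter (illustrative names; each CDM runs in its own GMP
child process, so a process-wide counter is unique per CDM):

    // Illustrative; the real counter lives on the GMPDecryptorChild side.
    static uint32_t sNextDecryptorId = 0;

    uint32_t NextDecryptorId() {
      return ++sNextDecryptorId;  // IDs start at 1.
    }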
MozReview-Commit-ID: KSh72ptX5fn
--HG--
extra : rebase_source : 9dd558aa9b2e9154e70fc328009b79e1daa884b2
This makes it easier to reuse in the ChromiumCDM code.
Also add an ExtractBuffer() method, which allows us to Move() the contained nsTArray
out without needing to copy the data.
MozReview-Commit-ID: 9suJSfXTVYy
--HG--
extra : rebase_source : 6eec99eb5329f3b8c3bb14d22459fee3bd95caf5
This makes it easier to reuse in the ChromiumCDM code.
Also add an ExtractBuffer() method, which allows us to Move() the contained nsTArray
out without needing to copy the data.
MozReview-Commit-ID: 9suJSfXTVYy
--HG--
extra : rebase_source : 89540b254249833cf8bb09792bb33cc402977d5a
This means we can reuse LogToConsole inside the new CDM decoder backend.
This change also makes GMPUtils.cpp build in non-unified build mode.
MozReview-Commit-ID: AFkdHIos4X2
--HG--
extra : rebase_source : d31e794ce94fa724a90b1cfa842a86d119a4e2d1
extra : source : 6cad0b06a556795f6d6de123bb5a153ff06062f5
This prevents the Log macro from colliding with the Log function on
IPC ParamTraits definitions.
MozReview-Commit-ID: Hd2v6ilbmGc
--HG--
extra : rebase_source : d26d495878706fe5a2009dd33d226cc71193be13
Sometimes the build breaks because this file uses nsPrintfCString but
doesn't include its header.
MozReview-Commit-ID: CcawXkMucdA
--HG--
extra : rebase_source : 3b36138053c1ffa557fd59af37cf1cfa4166493a
The job id is just a counter, so rather than have other users of DecryptJob
reimplement their own counter, we can push the id/counter code into DecryptJob
itself.
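A minimal sketch of the idea (illustrative; the real class carries more state):

    class DecryptJob {
    public:
      explicit DecryptJob(MediaRawData* aSample)
        : mId(++sNextId), mSample(aSample) {}

      const uint32_t mId;  // Unique per job; callers no longer track this.

    private:
      static uint32_t sNextId;
      RefPtr<MediaRawData> mSample;
    };

    uint32_t DecryptJob::sNextId = 0;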
MozReview-Commit-ID: 3RB8ctplWkK
--HG--
extra : rebase_source : f6cd7fddb2bf208419cf314cd7b01c508d68b49e
extra : source : 9b18193d0c1ccdedec25f7395fe124a86660b9d5
This means it can be used in other CDMProxies, specifically the
upcoming Chromium CDM code in bug 1315850.
MozReview-Commit-ID: G26xclqhtSw
--HG--
extra : rebase_source : 041c2cb41ba444e0dea8de3ddcc6a119d480f4f7
extra : source : c7f66edac83a6662d99f59a48f70c539f8ccecc8
The job id is just a counter, so rather than have other users of DecryptJob
reimplement their own counter, we can push the id/counter code into DecryptJob
itself.
MozReview-Commit-ID: 3RB8ctplWkK
--HG--
extra : rebase_source : 7ee0e48ab7638d1f713ee6dd852ab4b5f91119e8
extra : source : 9b18193d0c1ccdedec25f7395fe124a86660b9d5
This means it can be used in other CDMProxies, specifically the
upcoming Chromium CDM code in bug 1315850.
MozReview-Commit-ID: G26xclqhtSw
--HG--
extra : rebase_source : c4c99063f7423b8cbbd49865d2d45eca32c254ce
extra : source : c7f66edac83a6662d99f59a48f70c539f8ccecc8
This means we don't need to include GMPService.h in GMPCrashHelperHolder.h
in order to use the GMPService inside GMPCrashHelperHolder.
This prevents an inclusion cycle between GMPService.h and GMPCrashHelperHolder.h
in another patch I'm working on.
MozReview-Commit-ID: AbcXvv4UMyl
--HG--
extra : rebase_source : bdae6e1fbbbe8ce4100f51d2339f69c23f12859f
Fixed coding style of files encountered in P1 and P2.
MozReview-Commit-ID: LApVu9K2oto
--HG--
extra : rebase_source : e3bb296baaec9df2011ff312fec2eda19dd125e6
This works, at least on Windows, if NSPR_LOG_FILE is set to a file
in the OS temp dir. This means we can turn on CDM logging in release
builds, in the sandboxed child process, without needing to recompile
to turn logging on with a #define.
This will make debugging issues with the CDM easier.
MozReview-Commit-ID: 6cAxMy4lv3T
--HG--
extra : rebase_source : eb75bba8e0dc38d1a0137cef28b7589ded43351a
Note: Only the Adobe GMP used enum storage, so now that it's unused we may
as well remove it.
MozReview-Commit-ID: JtmQ69eJzaI
--HG--
extra : rebase_source : 29929e680dc1692b957b34ce274c4944743768e8
GMP gtests fail on ASAN builds now since the GMPLoader requires a sandbox
starter, and ASAN doesn't run with a GMP sandbox. So only enforce that we
need a sandbox starter if we've built with sandboxing enabled.
MozReview-Commit-ID: GptxIZ7TFIy
--HG--
extra : rebase_source : 6265f91a9c80555b63f71ac5da116450d4728df1
The GMPLoader code was in plugin-container so that it was covered by
Adobe's voucher of plugin-container, but that's no longer necessary.
MozReview-Commit-ID: 3VRBAohRI9I
--HG--
extra : rebase_source : 58a30855ade14af4c4b1420edabd3abb398f232e
DeinitializeDecoder will now only be called if InitializeDecoder has been called first.
MozReview-Commit-ID: 93WexomWp92
--HG--
extra : rebase_source : 96dfa5666041340d56fbfce7a46fb7f8f67181dc
Adds the preference security.sandbox.logging.enabled and the environment
variable MOZ_SANDBOX_LOGGING to control whether sandbox violations are logged.
The pref defaults to true. On Linux, only the environment variable is
considered.
--HG--
extra : rebase_source : f67870a74795228548b290aec32d08552c068874
This removes the open of PGMPContent from PGMP, the bridge of
PGMPService and PGMP from PGMPContent, and the spawn of PGMP from
PGMPService. I made these changes all at once because the way the bridges
work made it hard to split them up.
--HG--
extra : rebase_source : d9311e3047b9855ad422838f5a8b6bfdc382d225
We already do this in GMPParent::ReadGMPInfoFile(), and I neglected to check
this in the Chromium/Widevine manifest parsing code. This means we won't add
the GMPParent to our list of GMPParents, and so
navigator.requestMediaKeySystemAccess won't advertise that we support Widevine.
MozReview-Commit-ID: 7x7pbO5vC5e
--HG--
extra : rebase_source : 6d220066d01921d67f0ccf917cb94da887ea01a8
Turns on sandbox denial logging if security.sandbox.logging.enabled is true.
Removes most sandbox violation messages but some related messages generated
by other processes will still get through.
--HG--
extra : rebase_source : 4f06e70d53b0f500cc85a869c5bd7f8ea20d8341
This basically rolls back aec9905b06fe from bug 1278198.
MozReview-Commit-ID: Drho21X6npW
--HG--
extra : rebase_source : 372bc7f4771ec0268535e3df2a745bc9fae8bd3b
This was only to support legacy storage for the Adobe GMP, and we don't support that any more.
MozReview-Commit-ID: BQLTDq535Qa
--HG--
extra : rebase_source : df73267af09847487e78513e774baa209c700a76
Now that we're not supporting Adobe EME anymore, we don't need to
provide a mechanism for GMPs to block browser shutdown.
MozReview-Commit-ID: KUC94IBQiod
--HG--
extra : rebase_source : ed521f28e272de11b2d0c4546b98baf6bd7c6e72
We were seeing almost permaorange failures in the WebRTC H.264/GMP tests
due to the GMP being shut down in the parent process in between the
content process performing an OOP select operation and then performing
an OOP launch operation.
That is, in GeckoMediaPluginServiceChild::GetContentParent() in between
the SendSelectGMP completing and the SendLaunchGMP completing, the GMP
would shut down, and so when the launch operation ran in the main process
it would fail.
The select and launch are separate operations so that the crash handler
can be reported to the content process and an association can be made
in the content process between the plugin ID and the crash helper before
we try to launch the GMP. This is so that if the GMP crashes on startup,
we're ready to handle the crash.
However it turns out that if the GMP crashes on startup, the crash report
message comes in after another round of the event/IPC message loop. So we
actually do have time in the content process to connect the crash helper
after the launch fails.
So in order to fix the problem of the GMP shutting down in between select
and launch, we can partially revert the changes I made in Bug 1267918 to
merge selecting and launching GMPs back into a single operation.
MozReview-Commit-ID: 5n4T1Gqlvr3
--HG--
extra : rebase_source : 6e6892a5e32a485b5bfc2f93bddb2d2fe5a422bd
In a similar vein to the previous patch, while we're waiting on a
GetContentParent promise to resolve, we don't want the GMPParent
to shutdown. So make IsUsed() check whether we're waiting on a
GetContentParent promise to resolve, so we don't pull the rug out
from under any code waiting to get a content parent to bridge a
GMP.
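Roughly, with illustrative member names, the check becomes:

    // Count an in-flight GetContentParent request as a use of the
    // GMPParent, so it isn't torn down underneath the waiting caller.
    bool GMPParent::IsUsed() {
      return mGMPContentChildCount > 0 ||
             mGetContentParentPromisesPending > 0;
    }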
MozReview-Commit-ID: 8cTCuXLXMsK
--HG--
extra : rebase_source : 8cc04d57ea1ef4e48c7ff088dbb12eabe4b3b223
extra : source : f79a51d9bd193024f7359ba6ff75076b15d15faf
When GMPService::GetContentParent returns a MozPromise, we end up failing in
test_peerConnection_scaleResolution.html with e10s enabled because we Close()
the GMPContentParent twice. The test causes two GMPVideoEncoderParents to
be created. When the number of IPDL actors on the GMPContentParent reach 0,
we close the IPC connection. With GetContentParent() returning a MozPromise,
it's more async, and so we can end up requesting the content parent in order
to create the second GMPVideoEncoderParent, but while we're waiting for
the promise to resolve the previous GMPVideoEncoderParent is destroyed and
the GMPContentParent closes its IPC connection. Then the GetContentParent
promise resolves, and that fails to operate correctly since it's closed its
IPC connection.
My solution here is to add a "blocker" that prevents the GMPContentParent from
being shut down while we're waiting for the GetContentParent promise to resolve.
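A sketch of the blocker, with illustrative names: the GMPContentParent only
closes its IPC connection once both its actor count and its blocker count
reach zero.

    void GMPContentParent::AddCloseBlocker() {
      ++mCloseBlockerCount;
    }

    void GMPContentParent::RemoveCloseBlocker() {
      if (--mCloseBlockerCount == 0) {
        CloseIfUnused();  // Still a no-op while actors remain.
      }
    }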
MozReview-Commit-ID: HxBkFkmv0tV
--HG--
extra : rebase_source : 59aa7bcfe8b8f44274d136d6147a946542a64fff
extra : source : 59ab10349b58b0fbe13dca9312ec82332f7c3dbe
The mochitest harness on Windows sets MOZ_GMP_PATH to paths with a mixture of
Windows and UNIX dir separators, and the NS_NewLocalFile() call in
GMPServiceParent::AddOnGMPThread() fails on this input.
We've had this problem before, and if we fixed the test harness to give us
input with platform-specific dir separators, somebody would likely just break
it again someday, so instead make the GMP service normalize the paths it's
given to have consistent dir separators.
This makes test_peerConnection_basicH264Video.html pass when run
locally on my Windows machine.
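The normalization itself is simple; one way to do it (a sketch, assuming we
normalize to backslashes on Windows before calling NS_NewLocalFile()):

    static void NormalizePath(nsString& aPath) {
    #ifdef XP_WIN
      aPath.ReplaceChar('/', '\\');  // Use Windows separators throughout.
    #endif
    }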
MozReview-Commit-ID: 88hSvTdZuWg
--HG--
extra : rebase_source : 2cf63ccd1155e59f9745163cf4a28d3bdb7012ba
We will use the new type for the generated IPDL message handler
prototype to make sure correct error handling method is called.
MozReview-Commit-ID: AzVbApxFGZ0
This change ensures that we don't create a new random node Id for every
MediaKeys object using Widevine - which has the effect of ensuring
Widevine CDMs that are same origin get created in the same process, and
that persistent storage can be used and retrieved.
MozReview-Commit-ID: K55rkcu9jWo
--HG--
extra : rebase_source : ebca24d2eeb4acd5fb14e0063cf2065c419853b1
Store a mapping of decryptor ID to the CDM instance that the corresponding
WidevineDecryptor is using. This allows us to link GMPDecryptor instances
with the corresponding GMPVideoDecoder.
The CDM is stored inside the CDMWrapper, so that we destroy the CDM instance
when the last reference to the CDM is dropped.
MozReview-Commit-ID: FQYzh77yjoC
--HG--
extra : rebase_source : 772d4bead18a9b88e7f9ee30b0f169a192322e24
Retrieve the ID of the GMPDecryptor from the GMPCDMProxy, and pass that
through to the GMPVideoDecoder's constructor.
MozReview-Commit-ID: IuNsSroZ9Zu
--HG--
extra : rebase_source : d678628dec67a059aec06918f07ea93ecc54a5f9
This enables us to identify GMPDecryptor instances in the child process, so that
in a later patch when we create a GMPVideoDecoder instance, we can associate it
with a GMPDecryptor. Then the cdm::ContentDecryptionModule8 instance that these
two actors are adapted to can know whom it's supposed to respond to.
We use the IPDL PGMPDecryptorChild actor ID as the GMPDecryptor's ID. This is unique
per GMP process, which is sufficient.
MozReview-Commit-ID: 7NKND9VjPUW
--HG--
extra : rebase_source : da14d9a8a7313a609e30649af1a23e79b3e401fe
This change ensures that we don't create a new random node Id for every
MediaKeys object using Widevine - which has the effect of ensuring
Widevine CDMs that are same origin get created in the same process, and
that persistent storage can be used and retrieved.
MozReview-Commit-ID: K55rkcu9jWo
--HG--
extra : rebase_source : 9bd789d05d1f5ed0a00eeb9870668e6335e899e6
Store a mapping of decryptor ID to the CDM instance that the corresponding
WidevineDecryptor is using. This allows us to link GMPDecryptor instances
with the corresponding GMPVideoDecoder.
The CDM is stored inside the CDMWrapper, so that we destroy the CDM instance
when the last reference to the CDM is dropped.
MozReview-Commit-ID: FQYzh77yjoC
--HG--
extra : rebase_source : 7e8c264200e904a4f5a1311f11cd317d98df9791
Retrieve the ID of the GMPDecryptor from the GMPCDMProxy, and pass that
through to the GMPVideoDecoder's constructor.
MozReview-Commit-ID: IuNsSroZ9Zu
--HG--
extra : rebase_source : 6f1db4a019deaedac07fa15c1958270268dcb941
This enables us to identify GMPDecryptor instances in the child process, so that
in a later patch when we create a GMPVideoDecoder instance, we can associate it
with a GMPDecryptor. Then the cdm::ContentDecryptionModule8 instance that these
two actors are adapted to can know whom it's supposed to respond to.
We use the IPDL PGMPDecryptorChild actor ID as the GMPDecryptor's ID. This is unique
per GMP process, which is sufficient.
MozReview-Commit-ID: 7NKND9VjPUW
--HG--
extra : rebase_source : 6ea7dfa358f8d13f7d36db5a581fc075268038b7
I've repeated myself a few times, so add a helper to make it easier to
determine which GMPs are available.
MozReview-Commit-ID: 2fFLeaA5o8u
--HG--
extra : rebase_source : 74ea0b429d339273535610df3bbd7fec7beae469
This ensures that when requests for keysystem access in the content process
retry, they do so on an up-to-date set of capabilities.
MozReview-Commit-ID: JxmlZnFhKYs
--HG--
extra : rebase_source : 6e02777be6a0692c7e157d3ab0a1952c3017c208
We only have one version of a GMP installed at once anyway. This version code
is to support Adobe Primetime, and that's disabled.
Making the behaviour of this code simpler will make it easier to keep the
behaviour of the GetGMPVersion code the same in e10s and non-e10s mode.
I will be removing this code soon, but I will do that in a later patch, so as
to not complicate the uplift.
MozReview-Commit-ID: 3cn7GhihWzm
--HG--
extra : rebase_source : f4e3470794a2a3dd1d97b8e78fe21df887854dc0
In order to avoid doing a synchronous call from content process to chrome
process in order to determine what GMPs are usable, maintain a cache of GMP
capabilities in the content processes.
We must seed the cache when content processes are created, as the GMP service
is started up and GMPs are added to it before the first (or any subsequent)
content process is created.
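A rough sketch of the content-side cache (illustrative types and names; the
real patch sends the capabilities over IPC when a content process starts and
on later updates):

    // One entry per usable GMP, kept in the content process.
    struct GMPCapabilityEntry {
      nsCString mName;               // e.g. "gmp-widevinecdm"
      nsCString mVersion;
      nsTArray<nsCString> mAPITags;  // APIs/keysystems the GMP supports.
    };

    // Seeded when the content process is created, refreshed on broadcast.
    static StaticAutoPtr<nsTArray<GMPCapabilityEntry>> sGMPCapabilities;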
MozReview-Commit-ID: Eb4Pu81XHmn
--HG--
extra : rebase_source : ef5de4dd17ee337ca378569691e55d4cfb7939ef