Since the semantics of `ContentParent::MarkAsDead` are significantly different
from `GeckoProcessManager::MarkAsDead`, let's rename the latter to better
reflect what it actually does.
Differential Revision: https://phabricator.services.mozilla.com/D92649
- What this patch does:
Send a "download-suspended-by-cache" signal to the media-element paired
with a cloned cache-stream right after the cache-stream is cloned. That
signal helps the media-element decide whether it's ready to enter the
HAVE_ENOUGH_DATA state and fire a "canplaythrough" event.
- Why this patch is needed:
It solves a WPT timeout issue, where the WPT test waits for a
"canplaythrough" event.
- What problem this patch solves:
This patch addresses the problem mentioned in [1].
Each media-element is paired with a cache-stream that downloads the data
it needs. When the data-download of a cache-stream gets suspended, the
media-element paired with that cache-stream should be notified. This
notification is one of the factors that moves the ready-state of the
media-element to HAVE_ENOUGH_DATA. However, the media-element paired
with a cloned cache-stream may never receive this notification, and in
the worst case it also never gets a chance to download enough data to
move its ready-state to HAVE_ENOUGH_DATA, in the case mentioned below.
This can happen when a media-element paired with a cloned cache-stream
is created after the cache is full and all of the cache-streams are
suspended. (A cloned cache-stream is a cache-stream cloned from another
cache-stream; they share the same underlying data because their paired
media-elements have the same `src` (of `HTMLMediaElement`).) The
"canplaythrough" event is fired when the ready-state transitions to
HAVE_ENOUGH_DATA, so in this case it would never be fired.
In the usual case, when a cache-stream goes from non-suspended to
suspended, it sends a "download-suspended-by-cache=true" signal to its
paired media-element when `MediaCache::Update()` runs. In fact, all
other cache-streams sharing the same underlying data send this signal at
the same time if necessary. (Later, once the cache-stream is resumed
from suspended to non-suspended, it sends a
"download-suspended-by-cache=false" signal to its paired media-element,
and all other cache-streams sharing the same underlying data do the
same if necessary.) The media-element keeps track of the signal it
receives. After the first frame of the media-element is loaded, the
ready-state of the media-element is forced to HAVE_ENOUGH_DATA if the
signal is true. (Otherwise, the ready-state is inferred from other
information.)
When a cache-stream is cloned from another one, the cloned cache-stream
is suspended by default. If it's added to a jammed cache where all of
the cache-streams are suspended because the cache is full, it never gets
a chance to fire the "download-suspended-by-cache" signal: neither its
source-stream nor itself has a status change between suspended and
non-suspended, so `MediaCache::Update` is unable to send the signal.
In this case, we should force the media-element paired with the newly
cloned cache-stream to transition its ready-state to HAVE_ENOUGH_DATA,
following the existing mechanism, by queueing a status-change update
when cloning the stream. The status-change update runs in
`MediaCache::Update`, which checks which signal should be sent. Once
"download-suspended-by-cache=true" is sent, the "canplaythrough" event
of the media-element can be dispatched after its first frame is loaded.
(The event listeners may make the media-element start playing, which is
likely to cause a cache seek that eventually revitalizes the cache.)
[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1646719#c5
Differential Revision: https://phabricator.services.mozilla.com/D92125
Add some logs that can help us check how many cache-streams are
suspended and the working status in `MediaCache::Update`.
Differential Revision: https://phabricator.services.mozilla.com/D92124
Without this patch, the FrameEncode gtest encoder won't encode the last bit of
data, making the total duration 20ms too short.
When passing EOS we encode the lookahead worth of silence, so this patch also
accounts for that.
Differential Revision: https://phabricator.services.mozilla.com/D91957
Without this patch, there could be an input rate leading to the use of a
resampler *and* framesLeft being 0 (due to not rounding up). Then we end up with too
little data to feed the resampler, and we fail an assert.
Differential Revision: https://phabricator.services.mozilla.com/D91956
Without this patch, the lookahead silence would be added to the source segment,
which is sampled at the input rate.
The lookahead is in the output rate.
Converting the lookahead to the input rate, and letting our regular encoding
logic convert it back to the output rate can lead to a rounding error for some
input rates. A rounding error here would lead to the last packet being too
short.
Differential Revision: https://phabricator.services.mozilla.com/D91725
There should be no accumulating rounding error here since packet durations are
exactly 200ms which does not lead to a rounding error when converting to
microseconds.
But this is for sanity, since the behavior prior to this patch is exactly how
you get an accumulating rounding error.
Differential Revision: https://phabricator.services.mozilla.com/D91955
Certain logic, like AudioOutputFramesPerPacket(), hinges on whether a
resampler exists. The resampler is destroyed when encoding is completed, making
such logic flawed from that point. To avoid this potential footgun we can decide
the output rate at construction time, since the input rate is known then.
Differential Revision: https://phabricator.services.mozilla.com/D91953
AudioTrackEncoder uses GetPacketDuration() for signaling upwards that data is
available to be encoded. Data to be encoded is sampled at the input rate while
GetPacketDuration() is the duration in the output rate.
Meanwhile, OpusTrackEncoder uses GetPacketDuration() internally for deciding how
much data to encode. This is after resampling, so it is correctly in the output rate.
To support both these cases, this patch adds NumOutputFramesPerPacket(), modeled
on GetOutputSampleRate(), denoting the packet duration in the output rate.
GetPacketDuration() is renamed to NumInputFramesPerPacket() and changed to be
the packet duration in the input rate.
Differential Revision: https://phabricator.services.mozilla.com/D91952
Without this patch there was a gap between the default-ctor and when real values
got set. If setting a member was forgotten, an audit would have been needed to
find it. With this patch the compiler will make sure all values have been
explicitly handed to the ctor.
mFrameData (nsTArray) becomes Refcountable to allow VP8TrackEncoder to extend
the duration of an EncodedFrame by copying the last frame's ref and constructing
a new EncodedFrame with a longer duration than the last one's.
Differential Revision: https://phabricator.services.mozilla.com/D91724
This was originally handled by EbmlComposer; since bug 1014393 it has been handled
by MediaEncoder. By doing it in OpusTrackEncoder we can avoid reading hardcoded
fields in the opus metadata to get the codec delay value.
Differential Revision: https://phabricator.services.mozilla.com/D91723
When the event source is closed, it should be responsible for clearing and resetting the virtual control interface, rather than having the `Media Control Server` do so by setting some empty results.
Differential Revision: https://phabricator.services.mozilla.com/D92116
The old way of opening/closing the event source, triggered by the controller-amount-change event, is less intuitive, and we do extra cleanup when closing the event source by assigning some parameters [1], which causes an issue on Windows where the control interface can't be cleared up completely.
Each platform has its own way to clean up the interface. For example, on Windows we can simply call `ISystemMediaTransportControlsDisplayUpdater::ClearAll()`, so setting those parameters actually helps nothing. The best way is to ask the event source to do the cleanup itself, rather than setting those unnecessary parameters.
Therefore, we now open/close the event source closer to when we determine or clear the main controller, and ask the event source to take responsibility for cleaning up when it closes.
[1] https://searchfox.org/mozilla-central/rev/35245411b9e8a911fe3f5adb0632c3394f8b4ccb/dom/media/mediacontrol/MediaControlService.cpp#410-413
Differential Revision: https://phabricator.services.mozilla.com/D92115
As we've chosen another way for the GeckoView implementation, `SetControlledTabBrowsingContextId` is no longer needed.
Differential Revision: https://phabricator.services.mozilla.com/D92114
This avoids us risking an overflow when we convert encrypted media with
subsamples to AnnexB (since that conversion can grow the clear sizes of the
sample). See the test in the preceding patch for an example of how and why this
happens.
Differential Revision: https://phabricator.services.mozilla.com/D92300
Add test code to ensure AnnexB conversions behave as we expect. This adds some
coverage for non-encrypted conversions that we only tested with broader tests
until now. It also adds a test to ensure we don't overflow our subsample sizes
when dealing with encrypted media with very large subsamples. This latter test
covers the issue seen in bug 1618529.
Differential Revision: https://phabricator.services.mozilla.com/D92299
`mBlockOwnersWatermark` was introduced in bug 1366936 for the telemetry probe `MEDIACACHE_BLOCKOWNERS_WATERMARK`, but `MEDIACACHE_BLOCKOWNERS_WATERMARK` was removed in bug 1356046.
Depends on D92106
Differential Revision: https://phabricator.services.mozilla.com/D92107
`mIndexWatermark` was introduced in bug 1366929 for the telemetry probe `MEDIACACHE_WATERMARK_KB`, but `MEDIACACHE_WATERMARK_KB` was removed in bug 1356046.
Depends on D92105
Differential Revision: https://phabricator.services.mozilla.com/D92106
These were originally masked due to an exception in
tools/lint/file-whitespace.html for the directory dom/media/tests, but we
don't need an exception for our new location (dom/media/webrtc/tests/mochitests)
if we fix these 3 files.
2 files had bad line endings (Windows vs Unix):
dom/media/webrtc/tests/mochitests/test_getUserMedia_cubebDisabled.html
dom/media/webrtc/tests/mochitests/test_getUserMedia_cubebDisabledFakeStreams.html
1 file had trailing whitespace:
dom/media/webrtc/tests/mochitests/test_peerConnection_threeUnbundledConnections.html
Differential Revision: https://phabricator.services.mozilla.com/D90630
Currently, changing the pref `media.hardwaremediakeys.enabled` only takes effect after restarting Firefox; this patch makes it possible to change it at runtime.
Differential Revision: https://phabricator.services.mozilla.com/D91841
This is mostly preparing for the future state where we might have SWGL WR mixed with real hardware webrender, and want a way to look up the state per-compositor.
Differential Revision: https://phabricator.services.mozilla.com/D92009
This adds a script to vendor libwebrtc and chromium/src/build from upstream
Google or GitHub repositories as well as from local repositories.
Differential Revision: https://phabricator.services.mozilla.com/D91611
Fix up includes so AnnexB.h and TestMediaDataEncoder.cpp don't rely on unified
build order. Reformat include lists to match style guide. Rework #include guard
on AnnexB.h to reflect style guide.
Differential Revision: https://phabricator.services.mozilla.com/D91825
An ArrayOfRemoteAudioData packs all its AudioData objects into a single Shmem.
This Shmem will be reused by the remote decoder over and over.
When used with webaudio, this reduces the number of memory allocations from 100 to 1 for each remote decoder.
Differential Revision: https://phabricator.services.mozilla.com/D91539
This reduces the number of shmem allocations to 1 instead of 100 when used with webaudio.
We also fix the AudioTrimmer for remote decoders, as the trimming information was lost over the IPC serialization.
Another issue corrected is that we no longer crash if the first MediaRawData decoded had an empty size.
Differential Revision: https://phabricator.services.mozilla.com/D91538
Those two objects can be used to pack multiple arrays of objects into a minimal amount of shmem.
An ArrayOfRemoteByteBuffer will take at most a single shmem and perform a single memory allocation.
Similarly, an ArrayOfMediaRawData will pack multiple MediaRawData objects into at most 3 shmems (one for each array of bytes a MediaRawData contains).
They are designed to work in combination with a ShmemPool, which owns each allocated Shmem so that they can be re-used over and over.
Differential Revision: https://phabricator.services.mozilla.com/D91537
We don't need `autoplay_would_be_allowed_count` and `autoplay_would_not_be_allowed_count`, so we can remove all related code.
Differential Revision: https://phabricator.services.mozilla.com/D83227
One second is a bit short given that starting bluetooth input devices takes
several seconds on some platforms. Bug 1586370 may have worsened this since we
now process silence while waiting for an input to give us data.
Differential Revision: https://phabricator.services.mozilla.com/D91140
Attempting to lock textures from the same D3D11 device on multiple threads at once can lead to deadlocks, as observed with AMD cards.
Differential Revision: https://phabricator.services.mozilla.com/D91098
Because of D90771, we need the media session to notify its status correctly in order to deactivate the controller. Therefore, when its document becomes inactive (in the bfcache), we should treat the media session as inactive and notify `MediaStatusManager` in order to clear the active media session if needed.
In addition, add some assertions to ensure we won't modify or set any attributes on a media session when its document is inactive.
Differential Revision: https://phabricator.services.mozilla.com/D90926
Currently, we only keep a controller active when it has controlled media. That strategy works well in the non-media-session situation, because only controlled media need to listen to media keys.
However, when there is a media session, things are slightly different. Even when we don't have any controlled media, the active media session may still listen to media keys and perform the corresponding operations. Therefore, we should keep the active media session able to receive media keys even after the controlled media has gone, and only deactivate a controller when it has neither controlled media nor an active media session.
For example, play an audible media element first, then press `next track`: if the media session is going to play an inaudible media element (which is not controllable, so no controlled media exists), we still want the media session to receive and handle media keys for the user.
Differential Revision: https://phabricator.services.mozilla.com/D90771
In particular, this removes the code that was limiting audibility-notification
spam, since that is now handled by the AudibilityMonitor.
Differential Revision: https://phabricator.services.mozilla.com/D90433
This is essentially the same code as the interleaved version, but backwards:
since the memory layout is the opposite, we want to take advantage of memory
locality and touch each audio sample only once.
The `ProcessAudioData` method has been renamed because of the similarity
between the `AudioData` type and `ProcessAudioData`, and since it can now
process `AudioBlock`s.
Differential Revision: https://phabricator.services.mozilla.com/D90431
The audible listener should be kept while the media sink is available. Disconnecting the listener in `StopMediaSink()` would prevent us from receiving the audible change after starting the media sink again.
Therefore, we should only disconnect the audible listener when we recreate a media sink or shut down.
Differential Revision: https://phabricator.services.mozilla.com/D90575
This had busted the MinGW build, where webrtc is not supported.
AudioInputProcessing relies on upstream webrtc for audio processing: aec, etc.
Differential Revision: https://phabricator.services.mozilla.com/D90852
This [0] commit unties the device selection from the fact that a stream
transports voice data. Telling cubeb that the stream has voice data allows
lowering the complexity of the resampler, and lowering the impact of the
resampler on the latency.
[0]: ac3569ef18
Differential Revision: https://phabricator.services.mozilla.com/D89591
The platform thread API documentation says that the thread priority must be
set after the thread starts, and checks this with an assertion. I'm guessing
that this behaviour was different at some point in the past, and this code
ended up being commented out rather than fixed up during an update.
Differential Revision: https://phabricator.services.mozilla.com/D90869
If we make _maxFPSNeeded an atomic member variable, then we can avoid taking
a lock in DesktopCaptureImpl::process().
Differential Revision: https://phabricator.services.mozilla.com/D90868
This isn't strictly necessary, because the max-fps-needed calculation is being
moved into an atomic member variable, but for consistency it would be good
to take the API lock in every API call.
Differential Revision: https://phabricator.services.mozilla.com/D90866
We have both _apiCs and _callBackCs in DesktopCaptureImpl, presumably for
historical reasons, since one is protected and the other private. Since there
are no subclasses of DesktopCaptureImpl, one private CriticalSection is
sufficient here.
Differential Revision: https://phabricator.services.mozilla.com/D90865
Fix the `error: member access into incomplete type 'mozilla::layers::IGPUVideoSurfaceManager'` build bustage with --disable-accessibility.
We don't want to fully declare the class in the header, as that would require leaking most of the gfx headers.
Differential Revision: https://phabricator.services.mozilla.com/D90776
We unfortunately can't use the AsyncShutdownService in either the GPU or RDD process.
So we add a little utility class, AsyncBlockers, that resolves its promise once all services have deregistered from it.
We use it to temporarily prevent the RDDParent or GPUParent from killing the process, for up to 10s.
This allows for a cleaner shutdown, as the parent process doesn't guarantee the order in which processes are killed (even though it should).
Differential Revision: https://phabricator.services.mozilla.com/D90487
The RDD process gets shut down following an NS_XPCOM_SHUTDOWN_OBSERVER_ID notification.
Notifications are processed in LIFO order; since the RDD process is started on demand, it will typically have been registered after a content process.
We must ensure that the RDD process gets shut down after all content processes, so that it can receive the notifications that the RemoteDecoderManagerChilds are shutting down.
Differential Revision: https://phabricator.services.mozilla.com/D90485
Removes
- MEDIACACHESTREAM_LENGTH_KB
- MEDIA_PLAY_PROMISE_RESOLUTION
- AUDIO_TRACK_SILENCE_PROPORTION
and all the code I could find that was specific to reporting these values via
telemetry.
Differential Revision: https://phabricator.services.mozilla.com/D88868
This patch simplifies the logic by reducing the number of paths that change
the correction from two to one method. This method now weighs the calculated
correction by 60% and the previous correction by 40% when setting the new value,
to provide a negative feedback loop that stabilizes around the desired amount of
buffering.
This leads to a smoother output with less noticeable changes in the correction
value, while still reaching the desired amount of buffering quickly.
Tests are updated with new expectations accordingly.
Differential Revision: https://phabricator.services.mozilla.com/D89776
The removed logic complicates tests, as the correction code does not always
strive to reach the desired buffering level. If the changes are small enough,
the buffer continues to shrink or grow until we get close to the edge, and then
a much more abrupt correction change is applied, something noticeable by ear.
Differential Revision: https://phabricator.services.mozilla.com/D89775
With this patch, there is a fake audio thread present on a MockCubeb context as
soon as one MockCubebStream is running under that context. When the last running
MockCubebStream is stopped, the fake audio thread is joined and unset.
This adds a tad bit of complexity but results in zero unwanted drift between
MockCubebStreams under the same MockCubeb context. This is essential for stable
CrossGraphTrack tests.
A side effect of this is that the drift factor of a MockCubebStream does not
affect the interval at which data is processed, but rather the amount of data
processed each interval.
This patch also allows us to process data with virtually no wait time between
iterations (as opposed to wall-time 10ms-waits), for (much) speedier tests.
Differential Revision: https://phabricator.services.mozilla.com/D89772
The resampled source clock can become negative with a large desired buffer and
a small current buffer. This patch clamps it to a minimum of 1, so we never
drift-correct in the wrong direction.
Differential Revision: https://phabricator.services.mozilla.com/D89765
Without this patch we return 0, which can be misinterpreted by
AudioDriftCorrection as a sign that we have drifted a lot. This becomes more
obvious with a large desired buffer.
Differential Revision: https://phabricator.services.mozilla.com/D89759
This patch lets us pass in a drift factor to allow for testing of the drift
correction code. It also enables output verification for the CrossGraph tests.
Differential Revision: https://phabricator.services.mozilla.com/D89756
In a CrossGraphReceiver there is 100ms worth of buffering in AudioChunks.
Without this patch the graph will buffer 2400 frames in each track before
removing data from them. If a graph contains a CrossGraphReceiver and runs at a
sample rate lower than 24000Hz, that CrossGraphReceiver will run out of chunks
and an assertion failure happens at best.
Differential Revision: https://phabricator.services.mozilla.com/D89755
Because an audio driver starts out with its fallback driver running the graph,
we might use unnecessary amounts of silence for the verification. Especially
with the `GoFaster()` mode turned on, as the fallback driver's thread runs
rarely compared to how often we are feeding the graph audio data from the
MockCubebStream.
Differential Revision: https://phabricator.services.mozilla.com/D89753
The unittest does not verify that the input is forwarded to the output of the
CrossGraphReceiver because it is not easy to get the corresponding
MockCubebStream. This has been left as future work.
Depends on D85557
Differential Revision: https://phabricator.services.mozilla.com/D85558
Use the newly added functionality in MockCubeb to verify that the input is
forwarded to the output.
Depends on D85556
Differential Revision: https://phabricator.services.mozilla.com/D85557
With this patch, AudioGenerator is used to create a sine tone audio input to a
duplex stream. In parallel, the AudioVerifier is used to verify that this sine
tone exists in the output (on demand).
This is the first approach. Fancier generators/verifiers can be future work.
Depends on D85554
Differential Revision: https://phabricator.services.mozilla.com/D85555
This is useful in order to be used by the low-level part of the stack
(MockCubeb) where the buffers contain interleaved channels.
Depends on D85553
Differential Revision: https://phabricator.services.mozilla.com/D85554
The existing AudioGenerator takes over the job of AudioToneGenerator.
AudioToneVerifier becomes AudioVerifier to match the naming pattern.
This allows the functionality to be reused by other tests.
Depends on D85552
Differential Revision: https://phabricator.services.mozilla.com/D85553
In addition, remove it from the exclude list of the whitespace sanity check, assuming that the DOS EOLs had made it fail.
Differential Revision: https://phabricator.services.mozilla.com/D85552
When rejecting promises in ChromiumCDMProxy we pass an exception to MediaKeys to
represent the error that took place. However, if we do not have a MediaKeys, the
exception is not passed anywhere. Since these exceptions will assert on
destruction that they were handled we need to explicitly suppress the exception
when we don't have MediaKeys to avoid firing asserts.
The case we hit this issue in is during browser shutdown, so I think it makes
sense to ignore the exception. This is not a case of simply ignoring an
exception when it can be handled, this is that we're in a state where various
machinery is becoming unavailable and where it makes sense to not try and send
the exception any further.
Differential Revision: https://phabricator.services.mozilla.com/D90156