/* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this file,
 * You can obtain one at http://mozilla.org/MPL/2.0/. */

#include <MediaStreamGraphImpl.h>
#include "mozilla/dom/AudioContext.h"
#include "mozilla/SharedThreadPool.h"
#include "mozilla/ClearOnShutdown.h"
#include "CubebUtils.h"

#ifdef XP_MACOSX
#include <sys/sysctl.h>
#endif

extern mozilla::LazyLogModule gMediaStreamGraphLog;
#define STREAM_LOG(type, msg) MOZ_LOG(gMediaStreamGraphLog, type, msg)

// We don't use NSPR log here because we want this interleaved with adb logcat
// on Android/B2G
// #define ENABLE_LIFECYCLE_LOG
#ifdef ENABLE_LIFECYCLE_LOG
#ifdef ANDROID
#include "android/log.h"
#define LIFECYCLE_LOG(...) __android_log_print(ANDROID_LOG_INFO, "Gecko - MSG", __VA_ARGS__); printf(__VA_ARGS__);printf("\n");
#else
#define LIFECYCLE_LOG(...) printf(__VA_ARGS__);printf("\n");
#endif
#else
#define LIFECYCLE_LOG(...)
#endif

namespace mozilla {

StaticRefPtr<nsIThreadPool> AsyncCubebTask::sThreadPool;

struct AutoProfilerUnregisterThread
{
  // The empty ctor is used to silence a pre-4.8.0 GCC unused variable warning.
  AutoProfilerUnregisterThread()
  {
  }

  ~AutoProfilerUnregisterThread()
  {
    profiler_unregister_thread();
  }
};

GraphDriver::GraphDriver(MediaStreamGraphImpl* aGraphImpl)
  : mIterationStart(0),
    mIterationEnd(0),
    mGraphImpl(aGraphImpl),
    mWaitState(WAITSTATE_RUNNING),
    mAudioInput(nullptr),
    mCurrentTimeStamp(TimeStamp::Now()),
    mPreviousDriver(nullptr),
    mNextDriver(nullptr)
{ }

void GraphDriver::SetGraphTime(GraphDriver* aPreviousDriver,
                               GraphTime aLastSwitchNextIterationStart,
                               GraphTime aLastSwitchNextIterationEnd)
{
  GraphImpl()->GetMonitor().AssertCurrentThreadOwns();
  // We set mIterationEnd here because the first thing a driver does when it
  // runs an iteration is to update the graph times, so we are in fact setting
  // mIterationStart of the next iteration by setting the end of the previous
  // iteration.
  mIterationStart = aLastSwitchNextIterationStart;
  mIterationEnd = aLastSwitchNextIterationEnd;

  MOZ_ASSERT(!PreviousDriver());
  MOZ_ASSERT(aPreviousDriver);

  STREAM_LOG(LogLevel::Debug, ("Setting previous driver: %p (%s)",
                               aPreviousDriver,
                               aPreviousDriver->AsAudioCallbackDriver()
                                 ? "AudioCallbackDriver"
                                 : "SystemClockDriver"));
  SetPreviousDriver(aPreviousDriver);
}

void GraphDriver::SwitchAtNextIteration(GraphDriver* aNextDriver)
{
  GraphImpl()->GetMonitor().AssertCurrentThreadOwns();
  // This is the situation where `mPreviousDriver` is an AudioCallbackDriver
  // that is switching device, and the graph has found the current driver is not
  // an AudioCallbackDriver, but tries to switch to a _new_ AudioCallbackDriver
  // because it found audio has to be output. In this case, simply ignore the
  // request to switch, since we know we will switch back to the old
  // AudioCallbackDriver when it has recovered from the device switching.
  if (aNextDriver->AsAudioCallbackDriver() &&
      PreviousDriver() &&
      PreviousDriver()->AsAudioCallbackDriver()->IsSwitchingDevice() &&
      PreviousDriver() != aNextDriver) {
    return;
  }
  LIFECYCLE_LOG("Switching to new driver: %p (%s)",
                aNextDriver, aNextDriver->AsAudioCallbackDriver() ?
                "AudioCallbackDriver" : "SystemClockDriver");
  if (mNextDriver &&
      mNextDriver != GraphImpl()->CurrentDriver()) {
    LIFECYCLE_LOG("Discarding previous next driver: %p (%s)",
                  mNextDriver.get(), mNextDriver->AsAudioCallbackDriver() ?
                  "AudioCallbackDriver" : "SystemClockDriver");
  }
  SetNextDriver(aNextDriver);
}

GraphTime
GraphDriver::StateComputedTime() const
{
  return mGraphImpl->mStateComputedTime;
}

void GraphDriver::EnsureNextIteration()
{
  mGraphImpl->EnsureNextIteration();
}

void GraphDriver::Shutdown()
{
  if (AsAudioCallbackDriver()) {
    LIFECYCLE_LOG("Releasing audio driver off main thread (GraphDriver::Shutdown).\n");
    RefPtr<AsyncCubebTask> releaseEvent =
      new AsyncCubebTask(AsAudioCallbackDriver(), AsyncCubebOperation::SHUTDOWN);
    releaseEvent->Dispatch(NS_DISPATCH_SYNC);
  } else {
    Stop();
  }
}

bool GraphDriver::Switching()
{
  GraphImpl()->GetMonitor().AssertCurrentThreadOwns();
  return mNextDriver || mPreviousDriver;
}

GraphDriver* GraphDriver::NextDriver()
{
  GraphImpl()->GetMonitor().AssertCurrentThreadOwns();
  return mNextDriver;
}

GraphDriver* GraphDriver::PreviousDriver()
{
  GraphImpl()->GetMonitor().AssertCurrentThreadOwns();
  return mPreviousDriver;
}

void GraphDriver::SetNextDriver(GraphDriver* aNextDriver)
{
  GraphImpl()->GetMonitor().AssertCurrentThreadOwns();
  mNextDriver = aNextDriver;
}

void GraphDriver::SetPreviousDriver(GraphDriver* aPreviousDriver)
{
  GraphImpl()->GetMonitor().AssertCurrentThreadOwns();
  mPreviousDriver = aPreviousDriver;
}

ThreadedDriver::ThreadedDriver(MediaStreamGraphImpl* aGraphImpl)
  : GraphDriver(aGraphImpl)
{ }

ThreadedDriver::~ThreadedDriver()
{
  if (mThread) {
    mThread->Shutdown();
  }
}

class MediaStreamGraphInitThreadRunnable : public nsRunnable {
public:
  explicit MediaStreamGraphInitThreadRunnable(ThreadedDriver* aDriver)
    : mDriver(aDriver)
  {
  }
  NS_IMETHOD Run()
  {
    char aLocal;
    STREAM_LOG(LogLevel::Debug, ("Starting system thread"));
    profiler_register_thread("MediaStreamGraph", &aLocal);
    LIFECYCLE_LOG("Starting a new system driver for graph %p\n",
                  mDriver->mGraphImpl);

    GraphDriver* previousDriver = nullptr;
    {
      MonitorAutoLock mon(mDriver->mGraphImpl->GetMonitor());
      previousDriver = mDriver->PreviousDriver();
    }
    if (previousDriver) {
      LIFECYCLE_LOG("%p releasing an AudioCallbackDriver(%p), for graph %p\n",
                    mDriver,
                    previousDriver,
                    mDriver->GraphImpl());
      MOZ_ASSERT(!mDriver->AsAudioCallbackDriver());
      // Stop and release the previous driver off-main-thread, but only if
      // we're not in the situation where we've fallen back to a system clock
      // driver because the OSX audio stack is currently switching output
      // devices.
      if (!previousDriver->AsAudioCallbackDriver()->IsSwitchingDevice()) {
        RefPtr<AsyncCubebTask> releaseEvent =
          new AsyncCubebTask(previousDriver->AsAudioCallbackDriver(), AsyncCubebOperation::SHUTDOWN);
        releaseEvent->Dispatch();

        MonitorAutoLock mon(mDriver->mGraphImpl->GetMonitor());
        mDriver->SetPreviousDriver(nullptr);
      }
    } else {
      MonitorAutoLock mon(mDriver->mGraphImpl->GetMonitor());
      MOZ_ASSERT(mDriver->mGraphImpl->MessagesQueued(), "Don't start a graph without messages queued.");
      mDriver->mGraphImpl->SwapMessageQueues();
    }
    mDriver->RunThread();
    return NS_OK;
  }
private:
  ThreadedDriver* mDriver;
};

void
ThreadedDriver::Start()
{
  LIFECYCLE_LOG("Starting thread for a SystemClockDriver %p\n", mGraphImpl);
  nsCOMPtr<nsIRunnable> event = new MediaStreamGraphInitThreadRunnable(this);
  // Note: mThread may be null during event->Run() if we pass to NewNamedThread! See AudioInitTask
  nsresult rv = NS_NewNamedThread("MediaStreamGrph", getter_AddRefs(mThread));
  if (NS_SUCCEEDED(rv)) {
    mThread->Dispatch(event, NS_DISPATCH_NORMAL);
  }
}

void
ThreadedDriver::Resume()
{
  Start();
}

void
ThreadedDriver::Revive()
{
  // Note: only called on MainThread, without monitor
  // We know we weren't in a running state
  STREAM_LOG(LogLevel::Debug, ("AudioCallbackDriver reviving."));
  // If we were switching, switch now. Otherwise, tell the thread to run the
  // main loop again.
  MonitorAutoLock mon(mGraphImpl->GetMonitor());
  if (NextDriver()) {
    NextDriver()->SetGraphTime(this, mIterationStart, mIterationEnd);
    mGraphImpl->SetCurrentDriver(NextDriver());
    NextDriver()->Start();
  } else {
    nsCOMPtr<nsIRunnable> event = new MediaStreamGraphInitThreadRunnable(this);
    mThread->Dispatch(event, NS_DISPATCH_NORMAL);
  }
}

void
ThreadedDriver::RemoveCallback()
{
}

void
ThreadedDriver::Stop()
{
  NS_ASSERTION(NS_IsMainThread(), "Must be called on main thread");
  // mGraph's thread is not running so it's OK to do whatever here
  STREAM_LOG(LogLevel::Debug, ("Stopping threads for MediaStreamGraph %p", this));

  if (mThread) {
    mThread->Shutdown();
    mThread = nullptr;
  }
}

SystemClockDriver::SystemClockDriver(MediaStreamGraphImpl* aGraphImpl)
  : ThreadedDriver(aGraphImpl),
    mInitialTimeStamp(TimeStamp::Now()),
    mLastTimeStamp(TimeStamp::Now())
{}

SystemClockDriver::~SystemClockDriver()
{ }
2014-04-25 20:03:04 +04:00
|
|
|
void
|
2014-04-25 20:04:53 +04:00
|
|
|
ThreadedDriver::RunThread()
|
2014-04-25 20:03:04 +04:00
|
|
|
{
|
|
|
|
AutoProfilerUnregisterThread autoUnregister;
|
|
|
|
|
2014-04-25 20:04:23 +04:00
|
|
|
bool stillProcessing = true;
|
|
|
|
while (stillProcessing) {
|
2015-07-23 08:15:49 +03:00
|
|
|
mIterationStart = IterationEnd();
|
|
|
|
mIterationEnd += GetIntervalForIteration();
|
|
|
|
|
2015-08-13 07:23:17 +03:00
|
|
|
GraphTime stateComputedTime = StateComputedTime();
|
|
|
|
if (stateComputedTime < mIterationEnd) {
|
2015-07-23 08:15:49 +03:00
|
|
|
STREAM_LOG(LogLevel::Warning, ("Media graph global underrun detected"));
|
2015-08-13 07:23:17 +03:00
|
|
|
mIterationEnd = stateComputedTime;
|
2015-07-23 08:15:49 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (mIterationStart >= mIterationEnd) {
|
|
|
|
NS_ASSERTION(mIterationStart == mIterationEnd ,
|
|
|
|
"Time can't go backwards!");
|
|
|
|
// This could happen due to low clock resolution, maybe?
|
|
|
|
STREAM_LOG(LogLevel::Debug, ("Time did not advance"));
|
|
|
|
}
|
2014-04-25 20:04:23 +04:00
|
|
|
|
2015-07-29 08:13:23 +03:00
|
|
|
GraphTime nextStateComputedTime =
|
2014-04-25 20:04:23 +04:00
|
|
|
mGraphImpl->RoundUpToNextAudioBlock(
|
2015-07-23 08:15:49 +03:00
|
|
|
mIterationEnd + mGraphImpl->MillisecondsToMediaTime(AUDIO_TARGET_MS));
|
2015-08-04 10:54:54 +03:00
|
|
|
if (nextStateComputedTime < stateComputedTime) {
|
|
|
|
// A previous driver may have been processing further ahead of
|
|
|
|
// iterationEnd.
|
|
|
|
STREAM_LOG(LogLevel::Warning,
|
|
|
|
("Prevent state from going backwards. interval[%ld; %ld] state[%ld; %ld]",
|
|
|
|
(long)mIterationStart, (long)mIterationEnd,
|
|
|
|
(long)stateComputedTime, (long)nextStateComputedTime));
|
|
|
|
nextStateComputedTime = stateComputedTime;
|
|
|
|
}
|
2015-06-04 01:25:57 +03:00
|
|
|
STREAM_LOG(LogLevel::Debug,
|
2014-08-26 19:01:33 +04:00
|
|
|
("interval[%ld; %ld] state[%ld; %ld]",
|
|
|
|
(long)mIterationStart, (long)mIterationEnd,
|
2015-08-13 07:23:17 +03:00
|
|
|
(long)stateComputedTime, (long)nextStateComputedTime));
|
2014-04-25 20:04:23 +04:00
|
|
|
|
2015-08-04 10:42:10 +03:00
|
|
|
stillProcessing = mGraphImpl->OneIteration(nextStateComputedTime);
|
2014-08-26 19:01:33 +04:00
|
|
|
|
2015-12-01 13:47:31 +03:00
|
|
|
MonitorAutoLock lock(GraphImpl()->GetMonitor());
|
|
|
|
if (NextDriver() && stillProcessing) {
|
2015-06-04 01:25:57 +03:00
|
|
|
STREAM_LOG(LogLevel::Debug, ("Switching to AudioCallbackDriver"));
|
2016-01-21 19:51:36 +03:00
|
|
|
RemoveCallback();
|
2015-12-01 13:47:31 +03:00
|
|
|
NextDriver()->SetGraphTime(this, mIterationStart, mIterationEnd);
|
|
|
|
mGraphImpl->SetCurrentDriver(NextDriver());
|
|
|
|
NextDriver()->Start();
|
2014-08-26 19:01:33 +04:00
|
|
|
return;
|
|
|
|
}
|
2014-04-25 20:04:23 +04:00
|
|
|
}
|
2014-04-25 18:09:30 +04:00
|
|
|
}
|
|
|
|
|
2015-07-23 08:15:49 +03:00
|
|
|
MediaTime
|
|
|
|
SystemClockDriver::GetIntervalForIteration()
|
2014-04-25 18:09:30 +04:00
|
|
|
{
|
|
|
|
TimeStamp now = TimeStamp::Now();
|
2015-07-23 08:15:49 +03:00
|
|
|
MediaTime interval =
|
|
|
|
mGraphImpl->SecondsToMediaTime((now - mCurrentTimeStamp).ToSeconds());
|
2014-04-25 18:09:30 +04:00
|
|
|
mCurrentTimeStamp = now;
|
|
|
|
|
2015-07-23 08:15:49 +03:00
|
|
|
MOZ_LOG(gMediaStreamGraphLog, LogLevel::Verbose,
|
2015-08-13 07:23:17 +03:00
|
|
|
("Updating current time to %f (real %f, StateComputedTime() %f)",
|
2015-07-23 08:15:49 +03:00
|
|
|
mGraphImpl->MediaTimeToSeconds(IterationEnd() + interval),
|
|
|
|
(now - mInitialTimeStamp).ToSeconds(),
|
|
|
|
mGraphImpl->MediaTimeToSeconds(StateComputedTime())));
|
2014-04-25 18:09:30 +04:00
|
|
|
|
2015-07-23 08:15:49 +03:00
|
|
|
return interval;
|
2014-04-25 18:09:30 +04:00
|
|
|
}

TimeStamp
OfflineClockDriver::GetCurrentTimeStamp()
{
  MOZ_CRASH("This driver does not support getting the current timestamp.");
  return TimeStamp();
}

void
SystemClockDriver::WaitForNextIteration()
{
  mGraphImpl->GetMonitor().AssertCurrentThreadOwns();

  PRIntervalTime timeout = PR_INTERVAL_NO_TIMEOUT;
  TimeStamp now = TimeStamp::Now();
  if (mGraphImpl->mNeedAnotherIteration) {
    int64_t timeoutMS = MEDIA_GRAPH_TARGET_PERIOD_MS -
      int64_t((now - mCurrentTimeStamp).ToMilliseconds());
    // Make sure timeoutMS doesn't overflow 32 bits by waking up at
    // least once a minute, if we need to wake up at all
    timeoutMS = std::max<int64_t>(0, std::min<int64_t>(timeoutMS, 60*1000));
    timeout = PR_MillisecondsToInterval(uint32_t(timeoutMS));
    STREAM_LOG(LogLevel::Verbose,
               ("Waiting for next iteration; at %f, timeout=%f",
                (now - mInitialTimeStamp).ToSeconds(), timeoutMS/1000.0));
    if (mWaitState == WAITSTATE_WAITING_INDEFINITELY) {
      mGraphImpl->mGraphDriverAsleep = false; // atomic
    }
    mWaitState = WAITSTATE_WAITING_FOR_NEXT_ITERATION;
  } else {
    mGraphImpl->mGraphDriverAsleep = true; // atomic
    mWaitState = WAITSTATE_WAITING_INDEFINITELY;
  }
  if (timeout > 0) {
    mGraphImpl->GetMonitor().Wait(timeout);
    STREAM_LOG(LogLevel::Verbose,
               ("Resuming after timeout; at %f, elapsed=%f",
                (TimeStamp::Now() - mInitialTimeStamp).ToSeconds(),
                (TimeStamp::Now() - now).ToSeconds()));
  }

  if (mWaitState == WAITSTATE_WAITING_INDEFINITELY) {
    mGraphImpl->mGraphDriverAsleep = false; // atomic
  }
  mWaitState = WAITSTATE_RUNNING;
  mGraphImpl->mNeedAnotherIteration = false;
}
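The clamp in the branch above keeps the wait duration non-negative and caps it at one minute, so the narrowing cast to `uint32_t` for `PR_MillisecondsToInterval` cannot overflow. A standalone sketch of that arithmetic (hypothetical helper name, plain `int64_t` milliseconds standing in for Mozilla's `TimeStamp`/`PRIntervalTime` types):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical standalone version of the timeout computation in
// SystemClockDriver::WaitForNextIteration(): wait for the remainder of the
// target period, but clamp to [0 ms, 60 s] so the value always fits in a
// uint32_t and the thread wakes at least once a minute.
int64_t ClampIterationTimeoutMs(int64_t aTargetPeriodMs, int64_t aElapsedMs)
{
  int64_t timeoutMS = aTargetPeriodMs - aElapsedMs;
  return std::max<int64_t>(0, std::min<int64_t>(timeoutMS, 60 * 1000));
}
```

With a 10 ms target period, 3 ms already elapsed yields a 7 ms wait; an iteration that overran its period yields 0, i.e. no wait at all.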

void
SystemClockDriver::WakeUp()
{
  mGraphImpl->GetMonitor().AssertCurrentThreadOwns();
  mWaitState = WAITSTATE_WAKING_UP;
  mGraphImpl->mGraphDriverAsleep = false; // atomic
  mGraphImpl->GetMonitor().Notify();
}
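The sleep/wake protocol implemented by `WaitForNextIteration()` and `WakeUp()` can be mimicked with the C++ standard library. A minimal sketch, with `MiniDriver` as a made-up name and `std::condition_variable` standing in for Mozilla's `Monitor` (which bundles the same mutex + condition pair):

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

// Minimal analogue of the SystemClockDriver sleep/wake protocol: the driver
// thread blocks until either the timeout fires or another thread signals
// that a new iteration is needed (WakeUp() sets the flag and notifies).
struct MiniDriver {
  std::mutex mMonitor;
  std::condition_variable mCond;
  bool mNeedAnotherIteration = false;

  // Returns true if woken by WakeUp(), false if the timeout elapsed.
  bool WaitForNextIteration(std::chrono::milliseconds aTimeout) {
    std::unique_lock<std::mutex> lock(mMonitor);
    bool woken = mCond.wait_for(lock, aTimeout,
                                [this] { return mNeedAnotherIteration; });
    mNeedAnotherIteration = false;
    return woken;
  }

  void WakeUp() {
    std::lock_guard<std::mutex> lock(mMonitor);
    mNeedAnotherIteration = true;
    mCond.notify_one();
  }
};
```

The predicate overload of `wait_for` handles spurious wakeups, the same reason the real code re-checks `mWaitState` after `Monitor::Wait` returns.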

OfflineClockDriver::OfflineClockDriver(MediaStreamGraphImpl* aGraphImpl, GraphTime aSlice)
  : ThreadedDriver(aGraphImpl),
    mSlice(aSlice)
{
}

class MediaStreamGraphShutdownThreadRunnable : public nsRunnable {
public:
  explicit MediaStreamGraphShutdownThreadRunnable(nsIThread* aThread)
    : mThread(aThread)
  {
  }
  NS_IMETHOD Run()
  {
    MOZ_ASSERT(NS_IsMainThread());
    MOZ_ASSERT(mThread);

    mThread->Shutdown();
    mThread = nullptr;
    return NS_OK;
  }
private:
  nsCOMPtr<nsIThread> mThread;
};

OfflineClockDriver::~OfflineClockDriver()
{
  // transfer the ownership of mThread to the event
  // XXX should use .forget()/etc
  if (mThread) {
    nsCOMPtr<nsIRunnable> event = new MediaStreamGraphShutdownThreadRunnable(mThread);
    mThread = nullptr;
    NS_DispatchToMainThread(event);
  }
}

MediaTime
OfflineClockDriver::GetIntervalForIteration()
{
  return mGraphImpl->MillisecondsToMediaTime(mSlice);
}
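`MillisecondsToMediaTime` converts the slice length into graph ticks. Assuming the usual rate conversion (MediaTime counts ticks at the graph's sample rate, rounded down), the arithmetic reduces to:

```cpp
#include <cstdint>

// Hedged sketch: hypothetical helper showing the milliseconds -> ticks
// conversion that MillisecondsToMediaTime is assumed to perform here.
// aMS milliseconds at aSampleRate ticks per second is
// aMS * aSampleRate / 1000 ticks, with truncating integer division.
int64_t MillisecondsToTicks(int64_t aMS, int64_t aSampleRate)
{
  return aMS * aSampleRate / 1000;
}
```

So a 10 ms offline slice at 48 kHz corresponds to 480 ticks of graph time per iteration.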

void
OfflineClockDriver::WaitForNextIteration()
{
  // No op: we want to go as fast as possible when we are offline
}

void
OfflineClockDriver::WakeUp()
{
  MOZ_ASSERT(false, "An offline graph should not have to wake up.");
}

AsyncCubebTask::AsyncCubebTask(AudioCallbackDriver* aDriver, AsyncCubebOperation aOperation)
  : mDriver(aDriver),
    mOperation(aOperation),
    mShutdownGrip(aDriver->GraphImpl())
{
  NS_WARN_IF_FALSE(mDriver->mAudioStream || aOperation == AsyncCubebOperation::INIT,
                   "No audio stream!");
}

AsyncCubebTask::~AsyncCubebTask()
{
}

/* static */
nsresult
AsyncCubebTask::EnsureThread()
{
  if (!sThreadPool) {
    nsCOMPtr<nsIThreadPool> threadPool =
      SharedThreadPool::Get(NS_LITERAL_CSTRING("CubebOperation"), 1);
    sThreadPool = threadPool;
    // Need to null this out before xpcom-shutdown-threads Observers run
    // since we don't know the order that the shutdown-threads observers
    // will run.  ClearOnShutdown guarantees it runs first.
    if (!NS_IsMainThread()) {
      NS_DispatchToMainThread(NS_NewRunnableFunction([]() -> void {
        ClearOnShutdown(&sThreadPool, ShutdownPhase::ShutdownThreads);
      }));
    } else {
      ClearOnShutdown(&sThreadPool, ShutdownPhase::ShutdownThreads);
    }

    const uint32_t kIdleThreadTimeoutMs = 2000;

    nsresult rv =
      sThreadPool->SetIdleThreadTimeout(PR_MillisecondsToInterval(kIdleThreadTimeoutMs));
    if (NS_WARN_IF(NS_FAILED(rv))) {
      return rv;
    }
  }

  return NS_OK;
}

NS_IMETHODIMP
AsyncCubebTask::Run()
{
  MOZ_ASSERT(mDriver);

  switch(mOperation) {
    case AsyncCubebOperation::INIT: {
      LIFECYCLE_LOG("AsyncCubebOperation::INIT driver=%p\n", mDriver.get());
      mDriver->Init();
      mDriver->CompleteAudioContextOperations(mOperation);
      break;
    }
    case AsyncCubebOperation::SHUTDOWN: {
      LIFECYCLE_LOG("AsyncCubebOperation::SHUTDOWN driver=%p\n", mDriver.get());
      mDriver->Stop();

      mDriver->CompleteAudioContextOperations(mOperation);

      mDriver = nullptr;
      mShutdownGrip = nullptr;
      break;
    }
    default:
      MOZ_CRASH("Operation not implemented.");
  }

  // The thread will kill itself after a bit
  return NS_OK;
}

StreamAndPromiseForOperation::StreamAndPromiseForOperation(MediaStream* aStream,
                                                           void* aPromise,
                                                           dom::AudioContextOperation aOperation)
  : mStream(aStream)
  , mPromise(aPromise)
  , mOperation(aOperation)
{
  // MOZ_ASSERT(aPromise);
}

AudioCallbackDriver::AudioCallbackDriver(MediaStreamGraphImpl* aGraphImpl)
  : GraphDriver(aGraphImpl)
  , mSampleRate(0)
  , mIterationDurationMS(MEDIA_GRAPH_TARGET_PERIOD_MS)
  , mStarted(false)
  , mAudioInput(nullptr)
  , mAudioChannel(aGraphImpl->AudioChannel())
  , mAddedMixer(false)
  , mInCallback(false)
  , mMicrophoneActive(false)
#ifdef XP_MACOSX
  , mCallbackReceivedWhileSwitching(0)
#endif
{
  STREAM_LOG(LogLevel::Debug, ("AudioCallbackDriver ctor for graph %p", aGraphImpl));
}
|
|
|
|
|
|
|
|
AudioCallbackDriver::~AudioCallbackDriver()
|
Bug 1094764 - Implement AudioContext.suspend and friends. r=roc,ehsan
- Relevant spec text:
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-suspend-Promise
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-resume-Promise
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-close-Promise
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-state
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-onstatechange
- In a couple words, the behavior we want:
- Closed context cannot have new nodes created, but can do decodeAudioData,
and create buffers, and such.
- OfflineAudioContexts don't support those methods, transitions happen at
startRendering and at the end of processing. onstatechange is used to make
this observable.
- (regular) AudioContexts support those methods. The promises and
onstatechange should be resolved/called when the operation has actually
completed on the rendering thread. Once a context has been closed, it
cannot transition back to "running". An AudioContext switches to "running"
when the audio callback start running, this allow authors to know how long
the audio stack takes to start running.
- MediaStreams that feed in/go out of a suspended graph should respectively
not buffer at the graph input, and output silence
- suspended context should not be doing much on the CPU, and we should try
to pause audio streams if we can (this behaviour is the main reason we need
this in the first place, for saving battery on mobile, and CPU on all
platforms)
- Now, the implementation:
- AudioNodeStreams are now tagged with a context id, to be able to operate
on all the streams of a given AudioContext on the Graph thread without
having to go and lock everytime to touch the AudioContext. This happens in
the AudioNodeStream ctor. IDs are of course constant for the lifetime of the
node.
- When an AudioContext goes into suspended mode, streams for this
AudioContext are moved out of the mStreams array to a second array,
mSuspendedStreams. Streams in mSuspendedStream are not ordered, and are not
processed.
- The MSG will automatically switch to a SystemClockDriver when it finds
that there are no more AudioNodeStream/Stream with an audio track. This is
how pausing the audio subsystem and saving battery works. Subsequently, when
the MSG finds that there are only streams in mSuspendedStreams, it will go
to sleep (block on a monitor), so we save CPU, but it does not shut itself
down. This is mostly not a new behaviour (this is what the MSG does since
the refactoring), but is important to note.
 - Promises are gripped (addref-ed) on the main thread, and then shepherded
down other threads and to the GraphDriver, if needed (sometimes we can
resolve them right away). They move between threads as void* to prevent
calling methods on them, as they are not thread safe. Then, the driver
executes the operation, and when it's done (initializing and closing audio
streams can take some time), we send the promise back to the main thread,
and resolve it, casting back to Promise* after asserting we're back on the
main thread. This way, we can send them back on the main thread once an
   operation has completed (suspending an audio stream, starting it again on
resume(), etc.), without having to do bookkeeping between suspend calls and
their result. Promises are not thread safe, so we can't move them around
AddRef-ed.
- The stream destruction logic now takes into account that a stream can be
destroyed while not being in mStreams.
- A graph can now switch GraphDriver twice or more per iteration, for
example if an author goes suspend()/resume()/suspend() in the same script.
 - Some operations have to be done on suspended streams, so we now use double
for-loop around mSuspendedStreams and mStreams in some places in
MediaStreamGraph.cpp.
- A tricky part was making sure everything worked at AudioContext
   boundaries. TrackUnionStreams that have one of their input streams suspended
append null ticks instead.
- The graph ordering algorithm had to be altered to not include suspended
streams.
- There are some edge cases (adding a stream on a suspended graph, calling
suspend/resume when a graph has just been close()d).
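The void*-shepherding of promises described above can be sketched roughly as follows. `FakePromise`, `ShepherdOffMainThread`, and `ResolveOnMainThread` are illustrative stand-ins for this sketch, not the actual Gecko types or API:

```cpp
#include <cassert>

// Hypothetical stand-in for mozilla::dom::Promise: only the manual
// ref-counting matters here; this is not the real Gecko type.
struct FakePromise {
  int mRefCnt = 0;
  bool mResolved = false;
  void AddRef() { ++mRefCnt; }
  void Release() { --mRefCnt; }
  void MaybeResolve() { mResolved = true; }
};

// Grip (AddRef) the promise on the main thread, then hand it to another
// thread as an opaque void* so no methods can be called off-main-thread.
void* ShepherdOffMainThread(FakePromise* aPromise) {
  aPromise->AddRef(); // keep it alive while it travels as a void*
  return static_cast<void*>(aPromise);
}

// Once shepherded back to the main thread: cast back, resolve, drop the grip.
void ResolveOnMainThread(void* aOpaque) {
  FakePromise* promise = static_cast<FakePromise*>(aOpaque);
  promise->MaybeResolve();
  promise->Release();
}
```

The point of the opaque pointer is purely defensive: code holding a `void*` cannot accidentally call a non-thread-safe method on the promise while off the main thread.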
{
  MOZ_ASSERT(mPromisesForOperation.IsEmpty());
}

void
AudioCallbackDriver::Init()
{
  cubeb_stream_params output;
  cubeb_stream_params input;
  uint32_t latency;

  MOZ_ASSERT(!NS_IsMainThread(),
      "This is blocking and should never run on the main thread.");

  mSampleRate = output.rate = CubebUtils::PreferredSampleRate();

#if defined(__ANDROID__)
#if defined(MOZ_B2G)
  output.stream_type = CubebUtils::ConvertChannelToCubebType(mAudioChannel);
#else
  output.stream_type = CUBEB_STREAM_TYPE_MUSIC;
#endif
  if (output.stream_type == CUBEB_STREAM_TYPE_MAX) {
    NS_WARNING("Bad stream type");
    return;
  }
#else
  (void)mAudioChannel;
#endif

  output.channels = mGraphImpl->AudioChannelCount();
  if (AUDIO_OUTPUT_FORMAT == AUDIO_FORMAT_S16) {
    output.format = CUBEB_SAMPLE_S16NE;
  } else {
    output.format = CUBEB_SAMPLE_FLOAT32NE;
  }

  if (cubeb_get_min_latency(CubebUtils::GetCubebContext(), output, &latency) != CUBEB_OK) {
    NS_WARNING("Could not get minimal latency from cubeb.");
    return;
  }

  input = output;
  input.channels = 1; // change to support optional stereo capture

  cubeb_stream* stream;
  // XXX Only pass input if we have an input listener.  Always
  // set up output because it's easier, and it will just get silence.
  // XXX Add support for adding/removing an input listener later.
  if (cubeb_stream_init(CubebUtils::GetCubebContext(), &stream,
                        "AudioCallbackDriver",
                        mGraphImpl->mInputDeviceID,
                        mGraphImpl->mInputWanted ? &input : nullptr,
                        mGraphImpl->mOutputDeviceID,
                        mGraphImpl->mOutputWanted ? &output : nullptr, latency,
                        DataCallback_s, StateCallback_s, this) == CUBEB_OK) {
    mAudioStream.own(stream);
  } else {
    NS_WARNING("Could not create a cubeb stream for MediaStreamGraph, falling back to a SystemClockDriver");
    // Fall back to a driver using a normal thread.
    MonitorAutoLock lock(GraphImpl()->GetMonitor());
    SetNextDriver(new SystemClockDriver(GraphImpl()));
    NextDriver()->SetGraphTime(this, mIterationStart, mIterationEnd);
    mGraphImpl->SetCurrentDriver(NextDriver());
    NextDriver()->Start();
    return;
  }

  cubeb_stream_register_device_changed_callback(mAudioStream,
                                                AudioCallbackDriver::DeviceChangedCallback_s);

  StartStream();

  STREAM_LOG(LogLevel::Debug, ("AudioCallbackDriver started."));
}

void
AudioCallbackDriver::Destroy()
{
  STREAM_LOG(LogLevel::Debug, ("AudioCallbackDriver destroyed."));
  mAudioStream.reset();
}

void
AudioCallbackDriver::Resume()
{
  STREAM_LOG(LogLevel::Debug, ("Resuming audio threads for MediaStreamGraph %p", mGraphImpl));
  if (cubeb_stream_start(mAudioStream) != CUBEB_OK) {
    NS_WARNING("Could not start cubeb stream for MSG.");
  }
}

void
AudioCallbackDriver::Start()
{
  if (mPreviousDriver) {
    if (mPreviousDriver->AsAudioCallbackDriver()) {
      LIFECYCLE_LOG("Releasing audio driver off main thread.");
      RefPtr<AsyncCubebTask> releaseEvent =
        new AsyncCubebTask(mPreviousDriver->AsAudioCallbackDriver(),
                           AsyncCubebOperation::SHUTDOWN);
      releaseEvent->Dispatch();
      mPreviousDriver = nullptr;
    } else {
      LIFECYCLE_LOG("Dropping driver reference for SystemClockDriver.");
      mPreviousDriver = nullptr;
    }
  }

  LIFECYCLE_LOG("Starting new audio driver off main thread, "
                "to ensure it runs after previous shutdown.");
  RefPtr<AsyncCubebTask> initEvent =
    new AsyncCubebTask(AsAudioCallbackDriver(), AsyncCubebOperation::INIT);
  initEvent->Dispatch();
}

void
AudioCallbackDriver::StartStream()
{
  if (cubeb_stream_start(mAudioStream) != CUBEB_OK) {
    MOZ_CRASH("Could not start cubeb stream for MSG.");
  }

  {
    MonitorAutoLock mon(mGraphImpl->GetMonitor());
    mStarted = true;
    mWaitState = WAITSTATE_RUNNING;
  }
}

void
AudioCallbackDriver::Stop()
{
  if (cubeb_stream_stop(mAudioStream) != CUBEB_OK) {
    NS_WARNING("Could not stop cubeb stream for MSG.");
  }
}

void
AudioCallbackDriver::Revive()
{
  // Note: only called on MainThread, without monitor
  // We know we weren't in a running state
  STREAM_LOG(LogLevel::Debug, ("AudioCallbackDriver reviving."));
  // If we were switching, switch now. Otherwise, start the audio thread again.
  MonitorAutoLock mon(mGraphImpl->GetMonitor());
  if (NextDriver()) {
    RemoveCallback();
    NextDriver()->SetGraphTime(this, mIterationStart, mIterationEnd);
    mGraphImpl->SetCurrentDriver(NextDriver());
    NextDriver()->Start();
  } else {
    STREAM_LOG(LogLevel::Debug, ("Starting audio threads for MediaStreamGraph %p from a new thread.", mGraphImpl));
    RefPtr<AsyncCubebTask> initEvent =
      new AsyncCubebTask(this, AsyncCubebOperation::INIT);
    initEvent->Dispatch();
  }
}

void
AudioCallbackDriver::RemoveCallback()
{
  if (mAddedMixer) {
    mGraphImpl->mMixer.RemoveCallback(this);
    mAddedMixer = false;
  }
}

void
AudioCallbackDriver::WaitForNextIteration()
{
}

void
AudioCallbackDriver::WakeUp()
{
  mGraphImpl->GetMonitor().AssertCurrentThreadOwns();
  mGraphImpl->GetMonitor().Notify();
}

/* static */ long
AudioCallbackDriver::DataCallback_s(cubeb_stream* aStream,
                                    void* aUser,
                                    const void* aInputBuffer,
                                    void* aOutputBuffer,
                                    long aFrames)
{
  AudioCallbackDriver* driver = reinterpret_cast<AudioCallbackDriver*>(aUser);
  return driver->DataCallback(static_cast<const AudioDataValue*>(aInputBuffer),
                              static_cast<AudioDataValue*>(aOutputBuffer), aFrames);
}

/* static */ void
AudioCallbackDriver::StateCallback_s(cubeb_stream* aStream, void * aUser,
                                     cubeb_state aState)
{
  AudioCallbackDriver* driver = reinterpret_cast<AudioCallbackDriver*>(aUser);
  driver->StateCallback(aState);
}

/* static */ void
AudioCallbackDriver::DeviceChangedCallback_s(void* aUser)
{
  AudioCallbackDriver* driver = reinterpret_cast<AudioCallbackDriver*>(aUser);
  driver->DeviceChangedCallback();
}

bool AudioCallbackDriver::InCallback() {
  return mInCallback;
}

AudioCallbackDriver::AutoInCallback::AutoInCallback(AudioCallbackDriver* aDriver)
  : mDriver(aDriver)
{
  mDriver->mInCallback = true;
}

AudioCallbackDriver::AutoInCallback::~AutoInCallback() {
  mDriver->mInCallback = false;
}

Bug 1085356 - Better handling of OSX audio output devices switching when SourceMediaStream are present in the MSG. r=jesup
On OSX, when the audio output device changes, the OS will call the audio
callbacks in weird patterns, if at all, during a period of ~1s. If
real-time SourceMediaStreams are present in the MediaStreamGraph, this means
buffering will occur, and the overall latency between the MediaStreamGraph
insertion time, and the actual output time will grow.
To fix this, we detect when the output device changes, and we switch temporarily
to a SystemClockDriver, that will pull from the SourceMediaStream, and simply
discard all input data. Then, when we get audio callbacks called reliably
(basically, when OSX is done switching to the other output), we switch back to
the previous AudioCallbackDriver.
We keep the previous AudioCallbackDriver alive using a self-reference. If an
AudioCallbackDriver has a self-reference, that means it's in a state when a
device is switching, so it's not linked to an MSG per se.
2014-10-22 18:12:29 +04:00
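The callback-counting debounce this workaround relies on can be modeled in isolation roughly like this; `SwitchDebounce` and its method names are illustrative, not the actual Gecko code (which keeps the count in `mCallbackReceivedWhileSwitching` and the pending state in a self-reference):

```cpp
#include <cassert>

// Simplified model of the device-switch debounce described above: while a
// switch is pending, callbacks are swallowed (silence is written) and
// counted; after enough callbacks the backend is deemed settled and normal
// processing resumes.
class SwitchDebounce {
public:
  explicit SwitchDebounce(int aThreshold) : mThreshold(aThreshold) {}
  void BeginSwitch() { mSwitching = true; mCallbacks = 0; }
  // Returns true if this callback should be swallowed (output silence).
  bool OnCallback() {
    if (!mSwitching) {
      return false;
    }
    if (++mCallbacks >= mThreshold) {
      mSwitching = false; // device settled; process normally from now on
    }
    return true;
  }
private:
  int mThreshold;
  int mCallbacks = 0;
  bool mSwitching = false;
};
```

With a threshold of 10 (the value the real code uses), the first ten callbacks after a switch are swallowed and subsequent ones are processed normally.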

#ifdef XP_MACOSX
bool
AudioCallbackDriver::OSXDeviceSwitchingWorkaround()
{
  MonitorAutoLock mon(GraphImpl()->GetMonitor());
  if (mSelfReference) {
    // Apparently, depending on the osx version, on device switch, the
    // callback is called "some" number of times, and then stops being called,
    // and then gets called again. 10 is to be safe, it's a low-enough number
    // of milliseconds anyways (< 100ms)
    //STREAM_LOG(LogLevel::Debug, ("Callbacks during switch: %d", mCallbackReceivedWhileSwitching+1));
    if (mCallbackReceivedWhileSwitching++ >= 10) {
      STREAM_LOG(LogLevel::Debug, ("Got %d callbacks, switching back to CallbackDriver", mCallbackReceivedWhileSwitching));
      // If we have a self reference, we have fallen back temporarily on a
      // system clock driver, but we just got called back, that means the osx
      // audio backend has switched to the new device.
      // Ask the graph to switch back to the previous AudioCallbackDriver
      // (`this`), and when the graph has effectively switched, we can drop
      // the self reference and unref the SystemClockDriver we fell back on.
      if (GraphImpl()->CurrentDriver() == this) {
        mSelfReference.Drop(this);
        SetNextDriver(nullptr);
      } else {
        GraphImpl()->CurrentDriver()->SwitchAtNextIteration(this);
      }
    }
    return true;
  }
  return false;
}
#endif // XP_MACOSX

long
AudioCallbackDriver::DataCallback(const AudioDataValue* aInputBuffer,
                                  AudioDataValue* aOutputBuffer, long aFrames)
{
  bool stillProcessing;

  // Don't add the callback until we're inited and ready
  if (!mAddedMixer) {
    mGraphImpl->mMixer.AddCallback(this);
    mAddedMixer = true;
  }

#ifdef XP_MACOSX
  if (OSXDeviceSwitchingWorkaround()) {
    PodZero(aOutputBuffer, aFrames * mGraphImpl->AudioChannelCount());
    return aFrames;
  }
#endif

#ifdef DEBUG
  // DebugOnly<> doesn't work here... it forces an initialization that will
  // cause mInCallback to be set back to false before we exit the statement.
  // Do it by hand instead.
  AutoInCallback aic(this);
#endif

  GraphTime stateComputedTime = StateComputedTime();
  if (stateComputedTime == 0) {
    MonitorAutoLock mon(mGraphImpl->GetMonitor());
    // Because this function is called during cubeb_stream_init (to prefill the
    // audio buffers), it can be that we don't have a message here (because this
    // driver is the first one for this graph), and the graph would exit. Simply
    // return here until we have messages.
    if (!mGraphImpl->MessagesQueued()) {
      PodZero(aOutputBuffer, aFrames * mGraphImpl->AudioChannelCount());
      return aFrames;
    }
    mGraphImpl->SwapMessageQueues();
  }

  uint32_t durationMS = aFrames * 1000 / mSampleRate;

  // For now, simply average the duration with the previous
  // duration so there is some damping against sudden changes.
  if (!mIterationDurationMS) {
    mIterationDurationMS = durationMS;
  } else {
    mIterationDurationMS = (mIterationDurationMS*3) + durationMS;
    mIterationDurationMS /= 4;
  }

  mBuffer.SetBuffer(aOutputBuffer, aFrames);
  // fill part or all with leftover data from last iteration (since we
  // align to Audio blocks)
  mScratchBuffer.Empty(mBuffer);
  // if we totally filled the buffer (and mScratchBuffer isn't empty),
  // we don't need to run an iteration and if we do so we may overflow.
  if (mBuffer.Available()) {

    // State computed time is decided by the audio callback's buffer length. We
    // compute the iteration start and end from there, trying to keep the amount
    // of buffering in the graph constant.
    GraphTime nextStateComputedTime =
      mGraphImpl->RoundUpToNextAudioBlock(stateComputedTime + mBuffer.Available());

    mIterationStart = mIterationEnd;
    // inGraph is the number of audio frames there is between the state time and
    // the current time, i.e. the maximum theoretical length of the interval we
    // could use as [mIterationStart; mIterationEnd].
    GraphTime inGraph = stateComputedTime - mIterationStart;
    // We want the interval [mIterationStart; mIterationEnd] to be before the
    // interval [stateComputedTime; nextStateComputedTime]. We also want
    // the distance between these intervals to be roughly equivalent each time,
    // to ensure there is no clock drift between current time and state time.
    // Since we can't act on the state time because we have to fill the audio
    // buffer, we reclock the current time against the state time, here.
    mIterationEnd = mIterationStart + 0.8 * inGraph;

    STREAM_LOG(LogLevel::Verbose, ("interval[%ld; %ld] state[%ld; %ld] (frames: %ld) (durationMS: %u) (duration ticks: %ld)\n",
                                   (long)mIterationStart, (long)mIterationEnd,
                                   (long)stateComputedTime, (long)nextStateComputedTime,
                                   (long)aFrames, (uint32_t)durationMS,
                                   (long)(nextStateComputedTime - stateComputedTime)));

    mCurrentTimeStamp = TimeStamp::Now();

    if (stateComputedTime < mIterationEnd) {
      STREAM_LOG(LogLevel::Warning, ("Media graph global underrun detected"));
      mIterationEnd = stateComputedTime;
    }

    stillProcessing = mGraphImpl->OneIteration(nextStateComputedTime);
  } else {
    STREAM_LOG(LogLevel::Verbose, ("DataCallback buffer filled entirely from scratch buffer, skipping iteration."));
    stillProcessing = true;
  }

  mBuffer.BufferFilled();

  // Callback any observers for the AEC speaker data.  Note that one
  // (maybe) of these will be full-duplex, the others will get their input
  // data off separate cubeb callbacks.  Take care with how stuff is
  // removed/added to this list and TSAN issues, but input and output will
  // use separate callback methods.
  mGraphImpl->NotifyOutputData(aOutputBuffer, static_cast<size_t>(aFrames),
                               mSampleRate, ChannelCount);

  // Process mic data if any/needed -- after inserting far-end data for AEC!
  if (aInputBuffer) {
    if (mAudioInput) { // for this specific input-only or full-duplex stream
      mAudioInput->NotifyInputData(mGraphImpl, aInputBuffer,
                                   static_cast<size_t>(aFrames),
                                   mSampleRate, ChannelCount);
    }
  }

  bool switching = false;
  {
    MonitorAutoLock mon(mGraphImpl->GetMonitor());
    switching = !!NextDriver();
  }

  if (switching && stillProcessing) {
    // If the audio stream has not been started by the previous driver or
    // the graph itself, keep it alive.
    MonitorAutoLock mon(mGraphImpl->GetMonitor());
    if (!IsStarted()) {
      return aFrames;
    }
    STREAM_LOG(LogLevel::Debug, ("Switching to system driver."));
    RemoveCallback();
    NextDriver()->SetGraphTime(this, mIterationStart, mIterationEnd);
    mGraphImpl->SetCurrentDriver(NextDriver());
    NextDriver()->Start();
    // Returning less than aFrames starts the draining and eventually stops the
    // audio thread. This function will never get called again.
    return aFrames - 1;
  }

  if (!stillProcessing) {
    LIFECYCLE_LOG("Stopping audio thread for MediaStreamGraph %p", this);
    return aFrames - 1;
  }
  return aFrames;
}

void
AudioCallbackDriver::StateCallback(cubeb_state aState)
{
  STREAM_LOG(LogLevel::Debug, ("AudioCallbackDriver State: %d", aState));
}

void
AudioCallbackDriver::MixerCallback(AudioDataValue* aMixedBuffer,
                                   AudioSampleFormat aFormat,
                                   uint32_t aChannels,
                                   uint32_t aFrames,
                                   uint32_t aSampleRate)
{
  uint32_t toWrite = mBuffer.Available();

  if (!mBuffer.Available()) {
    NS_WARNING("DataCallback buffer full, expect frame drops.");
  }

  MOZ_ASSERT(mBuffer.Available() <= aFrames);

  mBuffer.WriteFrames(aMixedBuffer, mBuffer.Available());
  MOZ_ASSERT(mBuffer.Available() == 0,
             "Missing frames to fill audio callback's buffer.");

  DebugOnly<uint32_t> written =
    mScratchBuffer.Fill(aMixedBuffer + toWrite * aChannels, aFrames - toWrite);
  NS_WARN_IF_FALSE(written == aFrames - toWrite, "Dropping frames.");
}
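MixerCallback above splits the mixed frames in two: whatever fits in the current callback's output buffer is written directly, and the tail is spilled into mScratchBuffer to be drained at the start of the next data callback. A standalone sketch of that split, using hypothetical names (`SplitResult`, `SplitMixedFrames` — not Gecko's AudioCallbackBufferWrapper/SpillBuffer API):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative only: frames that fit in the callback's output buffer are
// written straight through; the leftover tail is spilled to a scratch buffer
// that the next callback drains first.
struct SplitResult {
  size_t written;  // frames written to the callback buffer
  size_t spilled;  // frames held back in the scratch buffer
};

SplitResult
SplitMixedFrames(size_t aAvailable,    // free frames in the callback buffer
                 size_t aMixedFrames,  // frames the mixer produced
                 uint32_t aChannels,
                 const float* aMixed,  // interleaved mixed samples
                 std::vector<float>& aScratch)
{
  size_t toWrite = std::min(aAvailable, aMixedFrames);
  // Frames [0, toWrite) would be copied into the callback's buffer here.
  aScratch.assign(aMixed + toWrite * aChannels,
                  aMixed + aMixedFrames * aChannels);  // spill the tail
  return { toWrite, aMixedFrames - toWrite };
}
```

With 10 mixed stereo frames and room for only 6, the split writes 6 frames and spills 4 (8 interleaved samples) to the scratch buffer.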

void AudioCallbackDriver::PanOutputIfNeeded(bool aMicrophoneActive)
{
#ifdef XP_MACOSX
  cubeb_device* out;
  int rv;
  char name[128];
  size_t length = sizeof(name);

  rv = sysctlbyname("hw.model", name, &length, NULL, 0);
  if (rv) {
    return;
  }

  if (!strncmp(name, "MacBookPro", 10)) {
    if (cubeb_stream_get_current_device(mAudioStream, &out) == CUBEB_OK) {
      // Check if we are currently outputting sound on external speakers.
      if (!strcmp(out->output_name, "ispk")) {
        // Pan everything to the right speaker.
        if (aMicrophoneActive) {
          if (cubeb_stream_set_panning(mAudioStream, 1.0) != CUBEB_OK) {
            NS_WARNING("Could not pan audio output to the right.");
          }
        } else {
          if (cubeb_stream_set_panning(mAudioStream, 0.0) != CUBEB_OK) {
            NS_WARNING("Could not pan audio output to the center.");
          }
        }
      } else {
        if (cubeb_stream_set_panning(mAudioStream, 0.0) != CUBEB_OK) {
          NS_WARNING("Could not pan audio output to the center.");
        }
      }
      cubeb_stream_device_destroy(mAudioStream, out);
    }
  }
#endif
}

void
AudioCallbackDriver::DeviceChangedCallback() {
  MonitorAutoLock mon(mGraphImpl->GetMonitor());
  PanOutputIfNeeded(mMicrophoneActive);

Bug 1085356 - Better handling of OSX audio output devices switching when SourceMediaStream are present in the MSG. r=jesup
On OSX, when the audio output device changes, the OS will call the audio
callbacks in weird patterns, if at all, during a period of ~1s. If
real-time SourceMediaStreams are present in the MediaStreamGraph, this means
buffering will occur, and the overall latency between the MediaStreamGraph
insertion time, and the actual output time will grow.
To fix this, we detect when the output device changes, and we switch temporarily
to a SystemClockDriver, that will pull from the SourceMediaStream, and simply
discard all input data. Then, when we get audio callbacks called reliably
(basically, when OSX is done switching to the other output), we switch back to
the previous AudioCallbackDriver.
We keep the previous AudioCallbackDriver alive using a self-reference. If an
AudioCallbackDriver has a self-reference, that means it's in a state when a
device is switching, so it's not linked to an MSG per se.
2014-10-22 18:12:29 +04:00
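The self-reference trick described above can be sketched with standard smart pointers. This is an illustration only — Gecko's actual mechanism is its own SelfReference<AudioCallbackDriver> helper with Take()/Drop(), not shared_ptr — but the lifetime it buys is the same:

```cpp
#include <memory>

// Illustration: a driver grips itself while a device switch is in flight, so
// it stays alive even after the graph drops its reference to it.
struct Driver {
  std::shared_ptr<Driver> mSelfReference;  // non-null only while switching
};

// Returns true if the self-reference kept the driver alive after the graph
// let go, and the driver died once the reference was dropped.
bool SelfReferenceKeepsDriverAlive()
{
  auto driver = std::make_shared<Driver>();
  std::weak_ptr<Driver> weak = driver;

  driver->mSelfReference = driver;  // Take(this): switch begins
  driver.reset();                   // the graph drops its reference
  bool aliveDuringSwitch = !weak.expired();

  weak.lock()->mSelfReference.reset();  // Drop(): switch complete
  bool deadAfterDrop = weak.expired();

  return aliveDuringSwitch && deadAfterDrop;
}
```

The self-reference is a deliberate cycle: while it is held, the object cannot be destroyed no matter who else lets go, which is exactly the guarantee needed while the OS is mid-switch.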
  // On OSX, changing the output device causes the audio thread to not call the
  // audio callback, so we're unable to process real-time input data, and this
  // results in latency building up.
  // We switch to a system driver until audio callbacks are called again, so we
  // still pull from the input stream, so that everything works apart from the
  // audio output.
#ifdef XP_MACOSX
  // Don't bother doing the device switching dance if the graph is not RUNNING
  // (starting up, shutting down), because we haven't started pulling from the
  // SourceMediaStream.
  if (!GraphImpl()->Running()) {
    return;
  }

  if (mSelfReference) {
    return;
  }
  STREAM_LOG(LogLevel::Error,
             ("Switching to SystemClockDriver during output switch"));
  mSelfReference.Take(this);
  mCallbackReceivedWhileSwitching = 0;
  SetNextDriver(new SystemClockDriver(GraphImpl()));
  RemoveCallback();
  mNextDriver->SetGraphTime(this, mIterationStart, mIterationEnd);
  mGraphImpl->SetCurrentDriver(mNextDriver);
  mNextDriver->Start();
#endif
}

void
AudioCallbackDriver::SetMicrophoneActive(bool aActive)
{
  MonitorAutoLock mon(mGraphImpl->GetMonitor());

  mMicrophoneActive = aActive;

  PanOutputIfNeeded(mMicrophoneActive);
}

uint32_t
AudioCallbackDriver::IterationDuration()
{
  // The real fix would be to have an API in cubeb to give us the number. Short
  // of that, we approximate it here. bug 1019507
  return mIterationDurationMS;
}

bool
AudioCallbackDriver::IsStarted() {
  mGraphImpl->GetMonitor().AssertCurrentThreadOwns();
  return mStarted;
}

Bug 1094764 - Implement AudioContext.suspend and friends. r=roc,ehsan
- Relevant spec text:
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-suspend-Promise
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-resume-Promise
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-close-Promise
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-state
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-onstatechange
- In a couple words, the behavior we want:
- Closed context cannot have new nodes created, but can do decodeAudioData,
and create buffers, and such.
- OfflineAudioContexts don't support those methods, transitions happen at
startRendering and at the end of processing. onstatechange is used to make
this observable.
- (regular) AudioContexts support those methods. The promises and
onstatechange should be resolved/called when the operation has actually
completed on the rendering thread. Once a context has been closed, it
cannot transition back to "running". An AudioContext switches to "running"
when the audio callback start running, this allow authors to know how long
the audio stack takes to start running.
- MediaStreams that feed in/go out of a suspended graph should respectively
not buffer at the graph input, and output silence
- suspended context should not be doing much on the CPU, and we should try
to pause audio streams if we can (this behaviour is the main reason we need
this in the first place, for saving battery on mobile, and CPU on all
platforms)
- Now, the implementation:
- AudioNodeStreams are now tagged with a context id, to be able to operate
on all the streams of a given AudioContext on the Graph thread without
having to go and lock everytime to touch the AudioContext. This happens in
the AudioNodeStream ctor. IDs are of course constant for the lifetime of the
node.
- When an AudioContext goes into suspended mode, streams for this
AudioContext are moved out of the mStreams array to a second array,
mSuspendedStreams. Streams in mSuspendedStream are not ordered, and are not
processed.
- The MSG will automatically switch to a SystemClockDriver when it finds
that there are no more AudioNodeStream/Stream with an audio track. This is
how pausing the audio subsystem and saving battery works. Subsequently, when
the MSG finds that there are only streams in mSuspendedStreams, it will go
to sleep (block on a monitor), so we save CPU, but it does not shut itself
down. This is mostly not a new behaviour (this is what the MSG does since
the refactoring), but is important to note.
- Promises are gripped (addref-ed) on the main thread, and then shepherd
down other threads and to the GraphDriver, if needed (sometimes we can
resolve them right away). They move between threads as void* to prevent
calling methods on them, as they are not thread safe. Then, the driver
executes the operation, and when it's done (initializing and closing audio
streams can take some time), we send the promise back to the main thread,
and resolve it, casting back to Promise* after asserting we're back on the
main thread. This way, we can send them back on the main thread once an
operation has complete (suspending an audio stream, starting it again on
resume(), etc.), without having to do bookkeeping between suspend calls and
their result. Promises are not thread safe, so we can't move them around
AddRef-ed.
- The stream destruction logic now takes into account that a stream can be
destroyed while not being in mStreams.
- A graph can now switch GraphDriver twice or more per iteration, for
example if an author goes suspend()/resume()/suspend() in the same script.
- Some operation have to be done on suspended stream, so we now use double
for-loop around mSuspendedStreams and mStreams in some places in
MediaStreamGraph.cpp.
- A tricky part was making sure everything worked at AudioContext
boundaries. TrackUnionStream that have one of their input stream suspended
append null ticks instead.
- The graph ordering algorithm had to be altered to not include suspended
streams.
- There are some edge cases (adding a stream on a suspended graph, calling
suspend/resume when a graph has just been close()d).
2015-02-27 20:22:05 +03:00
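The "promises move between threads as void*" part of the message above boils down to: AddRef on the origin thread, carry an opaque handle across threads, and only cast back and Release once back on that thread. A minimal standalone sketch with a hypothetical refcounted Promise type (not Gecko's dom::Promise):

```cpp
// Hypothetical refcounted promise, standing in for Gecko's dom::Promise,
// which is not thread-safe and must only have methods called on main thread.
struct Promise {
  int mRefCnt = 0;
  bool mResolved = false;
  void AddRef() { ++mRefCnt; }
  void Release() { --mRefCnt; }
  void Resolve() { mResolved = true; }
};

// Grip the promise on the main thread and hand out an opaque handle; while it
// travels through other threads as void*, no methods can be called on it.
void* GripForTransit(Promise* aPromise)
{
  aPromise->AddRef();
  return static_cast<void*>(aPromise);
}

// Back on the main thread (real code would assert NS_IsMainThread() first),
// cast back, resolve, and drop the grip taken above.
void ResolveBackOnMainThread(void* aOpaque)
{
  Promise* promise = static_cast<Promise*>(aOpaque);
  promise->Resolve();
  promise->Release();
}
```

The void* is what makes the hand-off safe by construction: other threads hold a pointer they cannot accidentally call through, and the refcount taken in GripForTransit keeps the object alive until ResolveBackOnMainThread releases it.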
void
AudioCallbackDriver::EnqueueStreamAndPromiseForOperation(MediaStream* aStream,
                                                         void* aPromise,
                                                         dom::AudioContextOperation aOperation)
{
  MonitorAutoLock mon(mGraphImpl->GetMonitor());
  mPromisesForOperation.AppendElement(StreamAndPromiseForOperation(aStream,
                                                                   aPromise,
                                                                   aOperation));
}

void AudioCallbackDriver::CompleteAudioContextOperations(AsyncCubebOperation aOperation)
{
  AutoTArray<StreamAndPromiseForOperation, 1> array;

  // We can't lock for the whole function because AudioContextOperationCompleted
  // will grab the monitor
  {
    MonitorAutoLock mon(GraphImpl()->GetMonitor());
    array.SwapElements(mPromisesForOperation);
  }

  for (uint32_t i = 0; i < array.Length(); i++) {
    StreamAndPromiseForOperation& s = array[i];
    if ((aOperation == AsyncCubebOperation::INIT &&
         s.mOperation == dom::AudioContextOperation::Resume) ||
        (aOperation == AsyncCubebOperation::SHUTDOWN &&
         s.mOperation != dom::AudioContextOperation::Resume)) {
      GraphImpl()->AudioContextOperationCompleted(s.mStream,
                                                  s.mPromise,
                                                  s.mOperation);
      array.RemoveElementAt(i);
      i--;
    }
|
Bug 1094764 - Implement AudioContext.suspend and friends. r=roc,ehsan
- Relevant spec text:
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-suspend-Promise
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-resume-Promise
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-close-Promise
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-state
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-onstatechange
- In a couple words, the behavior we want:
- Closed context cannot have new nodes created, but can do decodeAudioData,
and create buffers, and such.
- OfflineAudioContexts don't support those methods, transitions happen at
startRendering and at the end of processing. onstatechange is used to make
this observable.
- (regular) AudioContexts support those methods. The promises and
onstatechange should be resolved/called when the operation has actually
completed on the rendering thread. Once a context has been closed, it
cannot transition back to "running". An AudioContext switches to "running"
when the audio callback start running, this allow authors to know how long
the audio stack takes to start running.
- MediaStreams that feed in/go out of a suspended graph should respectively
not buffer at the graph input, and output silence
- suspended context should not be doing much on the CPU, and we should try
to pause audio streams if we can (this behaviour is the main reason we need
this in the first place, for saving battery on mobile, and CPU on all
platforms)
- Now, the implementation:
- AudioNodeStreams are now tagged with a context id, to be able to operate
on all the streams of a given AudioContext on the Graph thread without
having to go and lock every time to touch the AudioContext. This happens in
the AudioNodeStream ctor. IDs are of course constant for the lifetime of the
node.
- When an AudioContext goes into suspended mode, streams for this
AudioContext are moved out of the mStreams array to a second array,
mSuspendedStreams. Streams in mSuspendedStreams are not ordered, and are not
processed.
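
    The mStreams/mSuspendedStreams split described above can be modeled with a
    small sketch. This is illustrative only: ModelStream/ModelGraph are made-up
    stand-ins (using std::vector rather than nsTArray), not the actual MSG types.

    ```cpp
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Toy model of keeping suspended streams out of the processed array.
    struct ModelStream {
      uint64_t mAudioContextId; // set in the ctor, constant for the node's lifetime
    };

    struct ModelGraph {
      std::vector<ModelStream*> mStreams;          // ordered, processed each iteration
      std::vector<ModelStream*> mSuspendedStreams; // unordered, never processed

      // Move every stream of the given AudioContext out of mStreams.
      void SuspendContext(uint64_t aContextId) {
        auto it = std::stable_partition(
            mStreams.begin(), mStreams.end(),
            [&](ModelStream* s) { return s->mAudioContextId != aContextId; });
        mSuspendedStreams.insert(mSuspendedStreams.end(), it, mStreams.end());
        mStreams.erase(it, mStreams.end());
      }

      // Move them back when the context resumes.
      void ResumeContext(uint64_t aContextId) {
        auto it = std::stable_partition(
            mSuspendedStreams.begin(), mSuspendedStreams.end(),
            [&](ModelStream* s) { return s->mAudioContextId != aContextId; });
        mStreams.insert(mStreams.end(), it, mSuspendedStreams.end());
        mSuspendedStreams.erase(it, mSuspendedStreams.end());
      }
    };
    ```

    Because the processing loop only ever walks mStreams, suspended streams cost
    nothing per iteration; the per-stream context id is what makes the partition
    possible without locking the AudioContext.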
- The MSG will automatically switch to a SystemClockDriver when it finds
that there are no more AudioNodeStream/Stream with an audio track. This is
how pausing the audio subsystem and saving battery works. Subsequently, when
the MSG finds that there are only streams in mSuspendedStreams, it will go
to sleep (block on a monitor), so we save CPU, but it does not shut itself
down. This is mostly not a new behaviour (this is what the MSG does since
the refactoring), but is important to note.
- Promises are gripped (addref-ed) on the main thread, and then shepherded
down other threads and to the GraphDriver, if needed (sometimes we can
resolve them right away). They move between threads as void* to prevent
calling methods on them, as they are not thread safe. Then, the driver
executes the operation, and when it's done (initializing and closing audio
streams can take some time), we send the promise back to the main thread,
and resolve it, casting back to Promise* after asserting we're back on the
main thread. This way, we can send them back on the main thread once an
operation has completed (suspending an audio stream, starting it again on
resume(), etc.), without having to do bookkeeping between suspend calls and
their result. Promises are not thread safe, so we can't move them around
AddRef-ed.
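
    A minimal sketch of that "grip, erase the type, resolve back on the main
    thread" pattern. ModelPromise and both function names are hypothetical
    stand-ins for dom::Promise and the real MSG plumbing; the point is only the
    AddRef/void*/static_cast round trip.

    ```cpp
    // Toy model of moving a main-thread-only refcounted object across threads
    // as an opaque pointer.
    struct ModelPromise {
      int mRefCnt = 0;
      bool mResolved = false;
      void AddRef() { ++mRefCnt; }
      void Release() { --mRefCnt; }
    };

    // Main thread: grip the promise, then erase its type so no
    // (non-thread-safe) methods can be called while it travels through the
    // graph thread and the driver.
    inline void* GripAndErase(ModelPromise* aPromise) {
      aPromise->AddRef();
      return static_cast<void*>(aPromise);
    }

    // Main thread again, once the driver has finished the operation: recover
    // the type, resolve, and drop the grip taken above.
    inline void ResolveAndRelease(void* aOpaque) {
      auto* promise = static_cast<ModelPromise*>(aOpaque);
      promise->mResolved = true; // stand-in for actually resolving the promise
      promise->Release();
    }
    ```

    The extra reference taken in GripAndErase is what keeps the promise alive
    while it is off the main thread, without any cross-thread bookkeeping.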
- The stream destruction logic now takes into account that a stream can be
destroyed while not being in mStreams.
- A graph can now switch GraphDriver twice or more per iteration, for
example if an author goes suspend()/resume()/suspend() in the same script.
- Some operations have to be done on suspended streams, so we now use a double
for-loop around mSuspendedStreams and mStreams in some places in
MediaStreamGraph.cpp.
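
    That double for-loop can be sketched like this (toy types again; the real
    code iterates nsTArrays of MediaStream*, and ModelStream/VisitAllStreams
    are illustrative names):

    ```cpp
    #include <cstddef>
    #include <vector>

    // Toy model of visiting both the running and the suspended streams when an
    // operation applies to all of them.
    struct ModelStream { bool mVisited = false; };

    inline size_t VisitAllStreams(std::vector<ModelStream*>& aStreams,
                                  std::vector<ModelStream*>& aSuspendedStreams) {
      size_t visited = 0;
      std::vector<ModelStream*>* arrays[2] = {&aStreams, &aSuspendedStreams};
      for (auto* array : arrays) {      // outer loop: the two arrays
        for (ModelStream* s : *array) { // inner loop: each stream in that array
          s->mVisited = true;
          ++visited;
        }
      }
      return visited;
    }
    ```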
- A tricky part was making sure everything worked at AudioContext
boundaries. TrackUnionStreams that have one of their input streams suspended
append null ticks instead.
- The graph ordering algorithm had to be altered to not include suspended
streams.
- There are some edge cases (adding a stream on a suspended graph, calling
suspend/resume when a graph has just been close()d).
    }
  }

  if (!array.IsEmpty()) {
    MonitorAutoLock mon(GraphImpl()->GetMonitor());
    mPromisesForOperation.AppendElements(array);
  }
}
} // namespace mozilla