Mirror of https://github.com/mozilla/gecko-dev.git
Bug 1094764 - Implement AudioContext.suspend and friends. r=roc,ehsan
Relevant spec text:
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-suspend-Promise
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-resume-Promise
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-close-Promise
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-state
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-onstatechange

In a couple of words, the behavior we want:
- A closed context cannot have new nodes created on it, but it can still do decodeAudioData, create buffers, and the like.
- OfflineAudioContexts don't support those methods; their transitions happen at startRendering and at the end of processing. onstatechange is used to make this observable.
- (Regular) AudioContexts support those methods. The promises and onstatechange should be resolved/called when the operation has actually completed on the rendering thread. Once a context has been closed, it cannot transition back to "running". An AudioContext switches to "running" when the audio callback starts running; this lets authors know how long the audio stack takes to start running.
- MediaStreams that feed into, or come out of, a suspended graph should respectively not buffer at the graph input and output silence.
- A suspended context should not be doing much on the CPU, and we should try to pause audio streams if we can (this behavior is the main reason we need this in the first place: saving battery on mobile, and CPU on all platforms).

Now, the implementation:
- AudioNodeStreams are now tagged with a context id, to be able to operate on all the streams of a given AudioContext on the graph thread without having to lock every time to touch the AudioContext. This happens in the AudioNodeStream ctor. IDs are of course constant for the lifetime of the node.
- When an AudioContext goes into suspended mode, the streams for this AudioContext are moved out of the mStreams array to a second array, mSuspendedStreams. Streams in mSuspendedStreams are not ordered, and are not processed.
- The MSG will automatically switch to a SystemClockDriver when it finds that there are no more AudioNodeStreams/streams with an audio track. This is how pausing the audio subsystem and saving battery works. Subsequently, when the MSG finds that there are only streams in mSuspendedStreams, it will go to sleep (block on a monitor), so we save CPU, but it does not shut itself down. This is mostly not new behavior (it is what the MSG has done since the refactoring), but it is important to note.
- Promises are gripped (AddRef-ed) on the main thread, and then shepherded down to other threads and to the GraphDriver, if needed (sometimes we can resolve them right away). Because promises are not thread safe, they move between threads as void*, which prevents calling methods on them. The driver then executes the operation, and when it is done (initializing and closing audio streams can take some time), we send the promise back to the main thread and resolve it, casting back to Promise* after asserting we are back on the main thread. This way, we can send promises back to the main thread once an operation has completed (suspending an audio stream, starting it again on resume(), etc.), without having to do bookkeeping between suspend calls and their results. (A sketch of this pattern follows the implementation notes below.)
- The stream destruction logic now takes into account that a stream can be destroyed while not being in mStreams.
- A graph can now switch GraphDriver twice or more per iteration, for example if an author calls suspend()/resume()/suspend() in the same script.
- Some operations have to be done on suspended streams, so we now use a double for-loop over mSuspendedStreams and mStreams in some places in MediaStreamGraph.cpp (see the sketch after this list).
- A tricky part was making sure everything worked at AudioContext boundaries. TrackUnionStreams that have one of their input streams suspended append null ticks instead.
- The graph ordering algorithm had to be altered to not include suspended streams.
- There are some edge cases (adding a stream to a suspended graph, calling suspend/resume when a graph has just been close()d).
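The promise shepherding is the subtle part, so here is a minimal, hypothetical C++ sketch of the round-trip (condensed for illustration; AudioContextLike, SuspendInternal and ResolvePromise are made-up names, not the actual Gecko identifiers — only ApplyAudioContextOperation appears in the patch below):

    // Sketch only: grip the promise on the main thread, ship it across
    // threads as an opaque void*, and resolve it back on the main thread.
    void AudioContextLike::SuspendInternal(Promise* aPromise)
    {
      MOZ_ASSERT(NS_IsMainThread());
      NS_ADDREF(aPromise); // grip: keep it alive for the whole round-trip
      // As a void*, no thread in between can call methods on the promise.
      Graph()->ApplyAudioContextOperation(DestinationStream(),
                                          AudioContextOperation::Suspend,
                                          static_cast<void*>(aPromise));
    }

    // Dispatched back to the main thread once the driver has actually
    // suspended the audio stream:
    void AudioContextLike::ResolvePromise(void* aPromise)
    {
      MOZ_ASSERT(NS_IsMainThread()); // only now is it safe to touch it
      Promise* promise = static_cast<Promise*>(aPromise);
      promise->MaybeResolve(JS::UndefinedHandleValue);
      NS_RELEASE(promise); // drop the grip taken in SuspendInternal()
    }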
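And the double for-loop pattern, as it appears (modulo naming) in several functions in MediaStreamGraph.cpp in this patch:

    // Visit both the running and the suspended streams in one pass.
    nsTArray<MediaStream*>* runningAndSuspendedPair[2];
    runningAndSuspendedPair[0] = &mStreams;
    runningAndSuspendedPair[1] = &mSuspendedStreams;

    for (uint32_t array = 0; array < 2; array++) {
      for (uint32_t i = 0; i < runningAndSuspendedPair[array]->Length(); ++i) {
        MediaStream* stream = (*runningAndSuspendedPair[array])[i];
        // ... operate on |stream| whether it is running or suspended ...
      }
    }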
Parent: 747b227596
Commit: 70b6a9e143
|
@ -13057,7 +13057,8 @@ nsGlobalWindow::SuspendTimeouts(uint32_t aIncrease,
|
||||||
|
|
||||||
// Suspend all of the AudioContexts for this window
|
// Suspend all of the AudioContexts for this window
|
||||||
for (uint32_t i = 0; i < mAudioContexts.Length(); ++i) {
|
for (uint32_t i = 0; i < mAudioContexts.Length(); ++i) {
|
||||||
mAudioContexts[i]->Suspend();
|
ErrorResult dummy;
|
||||||
|
nsRefPtr<Promise> d = mAudioContexts[i]->Suspend(dummy);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -13117,7 +13118,8 @@ nsGlobalWindow::ResumeTimeouts(bool aThawChildren)
|
||||||
|
|
||||||
// Resume all of the AudioContexts for this window
|
// Resume all of the AudioContexts for this window
|
||||||
for (uint32_t i = 0; i < mAudioContexts.Length(); ++i) {
|
for (uint32_t i = 0; i < mAudioContexts.Length(); ++i) {
|
||||||
mAudioContexts[i]->Resume();
|
ErrorResult dummy;
|
||||||
|
nsRefPtr<Promise> d = mAudioContexts[i]->Resume(dummy);
|
||||||
}
|
}
|
||||||
|
|
||||||
// Thaw all of the workers for this window.
|
// Thaw all of the workers for this window.
|
||||||
|
|
|
@ -23,7 +23,7 @@ extern PRLogModuleInfo* gMediaStreamGraphLog;
|
||||||
#ifdef ENABLE_LIFECYCLE_LOG
|
#ifdef ENABLE_LIFECYCLE_LOG
|
||||||
#ifdef ANDROID
|
#ifdef ANDROID
|
||||||
#include "android/log.h"
|
#include "android/log.h"
|
||||||
#define LIFECYCLE_LOG(args...) __android_log_print(ANDROID_LOG_INFO, "Gecko - MSG" , ## __VA_ARGS__); printf(__VA_ARGS__);printf("\n");
|
#define LIFECYCLE_LOG(...) __android_log_print(ANDROID_LOG_INFO, "Gecko - MSG" , __VA_ARGS__); printf(__VA_ARGS__);printf("\n");
|
||||||
#else
|
#else
|
||||||
#define LIFECYCLE_LOG(...) printf(__VA_ARGS__);printf("\n");
|
#define LIFECYCLE_LOG(...) printf(__VA_ARGS__);printf("\n");
|
||||||
#endif
|
#endif
|
||||||
|
@ -95,9 +95,6 @@ void GraphDriver::SwitchAtNextIteration(GraphDriver* aNextDriver)
|
||||||
LIFECYCLE_LOG("Switching to new driver: %p (%s)",
|
LIFECYCLE_LOG("Switching to new driver: %p (%s)",
|
||||||
aNextDriver, aNextDriver->AsAudioCallbackDriver() ?
|
aNextDriver, aNextDriver->AsAudioCallbackDriver() ?
|
||||||
"AudioCallbackDriver" : "SystemClockDriver");
|
"AudioCallbackDriver" : "SystemClockDriver");
|
||||||
// Sometimes we switch twice to a new driver per iteration, this is probably a
|
|
||||||
// bug.
|
|
||||||
MOZ_ASSERT(!mNextDriver || mNextDriver->AsAudioCallbackDriver());
|
|
||||||
mNextDriver = aNextDriver;
|
mNextDriver = aNextDriver;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -145,7 +142,7 @@ public:
|
||||||
LIFECYCLE_LOG("Releasing audio driver off main thread.");
|
LIFECYCLE_LOG("Releasing audio driver off main thread.");
|
||||||
nsRefPtr<AsyncCubebTask> releaseEvent =
|
nsRefPtr<AsyncCubebTask> releaseEvent =
|
||||||
new AsyncCubebTask(mDriver->AsAudioCallbackDriver(),
|
new AsyncCubebTask(mDriver->AsAudioCallbackDriver(),
|
||||||
AsyncCubebTask::SHUTDOWN);
|
AsyncCubebOperation::SHUTDOWN);
|
||||||
mDriver = nullptr;
|
mDriver = nullptr;
|
||||||
releaseEvent->Dispatch();
|
releaseEvent->Dispatch();
|
||||||
} else {
|
} else {
|
||||||
|
@ -163,7 +160,7 @@ void GraphDriver::Shutdown()
|
||||||
if (AsAudioCallbackDriver()) {
|
if (AsAudioCallbackDriver()) {
|
||||||
LIFECYCLE_LOG("Releasing audio driver off main thread (GraphDriver::Shutdown).\n");
|
LIFECYCLE_LOG("Releasing audio driver off main thread (GraphDriver::Shutdown).\n");
|
||||||
nsRefPtr<AsyncCubebTask> releaseEvent =
|
nsRefPtr<AsyncCubebTask> releaseEvent =
|
||||||
new AsyncCubebTask(AsAudioCallbackDriver(), AsyncCubebTask::SHUTDOWN);
|
new AsyncCubebTask(AsAudioCallbackDriver(), AsyncCubebOperation::SHUTDOWN);
|
||||||
releaseEvent->Dispatch();
|
releaseEvent->Dispatch();
|
||||||
} else {
|
} else {
|
||||||
Stop();
|
Stop();
|
||||||
|
@ -204,7 +201,7 @@ public:
|
||||||
// because the osx audio stack is currently switching output device.
|
// because the osx audio stack is currently switching output device.
|
||||||
if (!mDriver->mPreviousDriver->AsAudioCallbackDriver()->IsSwitchingDevice()) {
|
if (!mDriver->mPreviousDriver->AsAudioCallbackDriver()->IsSwitchingDevice()) {
|
||||||
nsRefPtr<AsyncCubebTask> releaseEvent =
|
nsRefPtr<AsyncCubebTask> releaseEvent =
|
||||||
new AsyncCubebTask(mDriver->mPreviousDriver->AsAudioCallbackDriver(), AsyncCubebTask::SHUTDOWN);
|
new AsyncCubebTask(mDriver->mPreviousDriver->AsAudioCallbackDriver(), AsyncCubebOperation::SHUTDOWN);
|
||||||
mDriver->mPreviousDriver = nullptr;
|
mDriver->mPreviousDriver = nullptr;
|
||||||
releaseEvent->Dispatch();
|
releaseEvent->Dispatch();
|
||||||
}
|
}
|
||||||
|
@ -505,36 +502,21 @@ AsyncCubebTask::Run()
|
||||||
MOZ_ASSERT(mDriver);
|
MOZ_ASSERT(mDriver);
|
||||||
|
|
||||||
switch(mOperation) {
|
switch(mOperation) {
|
||||||
case AsyncCubebOperation::INIT:
|
case AsyncCubebOperation::INIT: {
|
||||||
LIFECYCLE_LOG("AsyncCubebOperation::INIT\n");
|
LIFECYCLE_LOG("AsyncCubebOperation::INIT\n");
|
||||||
mDriver->Init();
|
mDriver->Init();
|
||||||
|
mDriver->CompleteAudioContextOperations(mOperation);
|
||||||
break;
|
break;
|
||||||
case AsyncCubebOperation::SHUTDOWN:
|
}
|
||||||
|
case AsyncCubebOperation::SHUTDOWN: {
|
||||||
LIFECYCLE_LOG("AsyncCubebOperation::SHUTDOWN\n");
|
LIFECYCLE_LOG("AsyncCubebOperation::SHUTDOWN\n");
|
||||||
mDriver->Stop();
|
mDriver->Stop();
|
||||||
|
|
||||||
|
mDriver->CompleteAudioContextOperations(mOperation);
|
||||||
|
|
||||||
mDriver = nullptr;
|
mDriver = nullptr;
|
||||||
mShutdownGrip = nullptr;
|
mShutdownGrip = nullptr;
|
||||||
break;
|
break;
|
||||||
case AsyncCubebOperation::SLEEP: {
|
|
||||||
{
|
|
||||||
LIFECYCLE_LOG("AsyncCubebOperation::SLEEP\n");
|
|
||||||
MonitorAutoLock mon(mDriver->mGraphImpl->GetMonitor());
|
|
||||||
// We might just have been awoken
|
|
||||||
if (mDriver->mGraphImpl->mNeedAnotherIteration) {
|
|
||||||
mDriver->mPauseRequested = false;
|
|
||||||
mDriver->mWaitState = AudioCallbackDriver::WAITSTATE_RUNNING;
|
|
||||||
mDriver->mGraphImpl->mGraphDriverAsleep = false ; // atomic
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
mDriver->Stop();
|
|
||||||
mDriver->mGraphImpl->mGraphDriverAsleep = true; // atomic
|
|
||||||
mDriver->mWaitState = AudioCallbackDriver::WAITSTATE_WAITING_INDEFINITELY;
|
|
||||||
mDriver->mPauseRequested = false;
|
|
||||||
mDriver->mGraphImpl->GetMonitor().Wait(PR_INTERVAL_NO_TIMEOUT);
|
|
||||||
}
|
|
||||||
STREAM_LOG(PR_LOG_DEBUG, ("Restarting audio stream from sleep."));
|
|
||||||
mDriver->StartStream();
|
|
||||||
break;
|
|
||||||
}
|
}
|
||||||
default:
|
default:
|
||||||
MOZ_CRASH("Operation not implemented.");
|
MOZ_CRASH("Operation not implemented.");
|
||||||
|
@ -546,6 +528,16 @@ AsyncCubebTask::Run()
|
||||||
return NS_OK;
|
return NS_OK;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
StreamAndPromiseForOperation::StreamAndPromiseForOperation(MediaStream* aStream,
|
||||||
|
void* aPromise,
|
||||||
|
dom::AudioContextOperation aOperation)
|
||||||
|
: mStream(aStream)
|
||||||
|
, mPromise(aPromise)
|
||||||
|
, mOperation(aOperation)
|
||||||
|
{
|
||||||
|
// MOZ_ASSERT(aPromise);
|
||||||
|
}
|
||||||
|
|
||||||
AudioCallbackDriver::AudioCallbackDriver(MediaStreamGraphImpl* aGraphImpl, dom::AudioChannel aChannel)
|
AudioCallbackDriver::AudioCallbackDriver(MediaStreamGraphImpl* aGraphImpl, dom::AudioChannel aChannel)
|
||||||
: GraphDriver(aGraphImpl)
|
: GraphDriver(aGraphImpl)
|
||||||
, mIterationDurationMS(MEDIA_GRAPH_TARGET_PERIOD_MS)
|
, mIterationDurationMS(MEDIA_GRAPH_TARGET_PERIOD_MS)
|
||||||
|
@ -561,7 +553,9 @@ AudioCallbackDriver::AudioCallbackDriver(MediaStreamGraphImpl* aGraphImpl, dom::
|
||||||
}
|
}
|
||||||
|
|
||||||
AudioCallbackDriver::~AudioCallbackDriver()
|
AudioCallbackDriver::~AudioCallbackDriver()
|
||||||
{}
|
{
|
||||||
|
MOZ_ASSERT(mPromisesForOperation.IsEmpty());
|
||||||
|
}
|
||||||
|
|
||||||
void
|
void
|
||||||
AudioCallbackDriver::Init()
|
AudioCallbackDriver::Init()
|
||||||
|
@ -651,12 +645,18 @@ AudioCallbackDriver::Start()
|
||||||
if (NS_IsMainThread()) {
|
if (NS_IsMainThread()) {
|
||||||
STREAM_LOG(PR_LOG_DEBUG, ("Starting audio threads for MediaStreamGraph %p from a new thread.", mGraphImpl));
|
STREAM_LOG(PR_LOG_DEBUG, ("Starting audio threads for MediaStreamGraph %p from a new thread.", mGraphImpl));
|
||||||
nsRefPtr<AsyncCubebTask> initEvent =
|
nsRefPtr<AsyncCubebTask> initEvent =
|
||||||
new AsyncCubebTask(this, AsyncCubebTask::INIT);
|
new AsyncCubebTask(this, AsyncCubebOperation::INIT);
|
||||||
initEvent->Dispatch();
|
initEvent->Dispatch();
|
||||||
} else {
|
} else {
|
||||||
STREAM_LOG(PR_LOG_DEBUG, ("Starting audio threads for MediaStreamGraph %p from the previous driver's thread", mGraphImpl));
|
STREAM_LOG(PR_LOG_DEBUG, ("Starting audio threads for MediaStreamGraph %p from the previous driver's thread", mGraphImpl));
|
||||||
Init();
|
Init();
|
||||||
|
|
||||||
|
// Check if we need to resolve promises because the driver just got switched
|
||||||
|
// because of a resuming AudioContext
|
||||||
|
if (!mPromisesForOperation.IsEmpty()) {
|
||||||
|
CompleteAudioContextOperations(AsyncCubebOperation::INIT);
|
||||||
|
}
|
||||||
|
|
||||||
if (mPreviousDriver) {
|
if (mPreviousDriver) {
|
||||||
nsCOMPtr<nsIRunnable> event =
|
nsCOMPtr<nsIRunnable> event =
|
||||||
new MediaStreamGraphShutdownThreadRunnable(mPreviousDriver);
|
new MediaStreamGraphShutdownThreadRunnable(mPreviousDriver);
|
||||||
|
@ -704,7 +704,7 @@ AudioCallbackDriver::Revive()
|
||||||
} else {
|
} else {
|
||||||
STREAM_LOG(PR_LOG_DEBUG, ("Starting audio threads for MediaStreamGraph %p from a new thread.", mGraphImpl));
|
STREAM_LOG(PR_LOG_DEBUG, ("Starting audio threads for MediaStreamGraph %p from a new thread.", mGraphImpl));
|
||||||
nsRefPtr<AsyncCubebTask> initEvent =
|
nsRefPtr<AsyncCubebTask> initEvent =
|
||||||
new AsyncCubebTask(this, AsyncCubebTask::INIT);
|
new AsyncCubebTask(this, AsyncCubebOperation::INIT);
|
||||||
initEvent->Dispatch();
|
initEvent->Dispatch();
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
@ -729,20 +729,6 @@ AudioCallbackDriver::GetCurrentTime()
|
||||||
|
|
||||||
void AudioCallbackDriver::WaitForNextIteration()
|
void AudioCallbackDriver::WaitForNextIteration()
|
||||||
{
|
{
|
||||||
#if 0
|
|
||||||
mGraphImpl->GetMonitor().AssertCurrentThreadOwns();
|
|
||||||
|
|
||||||
// We can't block on the monitor in the audio callback, so we kick off a new
|
|
||||||
// thread that will pause the audio stream, and restart it when unblocked.
|
|
||||||
// We don't want to sleep when we haven't started the driver yet.
|
|
||||||
if (!mGraphImpl->mNeedAnotherIteration && mAudioStream && mGraphImpl->Running()) {
|
|
||||||
STREAM_LOG(PR_LOG_DEBUG+1, ("AudioCallbackDriver going to sleep"));
|
|
||||||
mPauseRequested = true;
|
|
||||||
nsRefPtr<AsyncCubebTask> sleepEvent =
|
|
||||||
new AsyncCubebTask(this, AsyncCubebTask::SLEEP);
|
|
||||||
sleepEvent->Dispatch();
|
|
||||||
}
|
|
||||||
#endif
|
|
||||||
}
|
}
|
||||||
|
|
||||||
void
|
void
|
||||||
|
@ -1074,5 +1060,47 @@ AudioCallbackDriver::IsStarted() {
|
||||||
return mStarted;
|
return mStarted;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
void
|
||||||
|
AudioCallbackDriver::EnqueueStreamAndPromiseForOperation(MediaStream* aStream,
|
||||||
|
void* aPromise,
|
||||||
|
dom::AudioContextOperation aOperation)
|
||||||
|
{
|
||||||
|
MonitorAutoLock mon(mGraphImpl->GetMonitor());
|
||||||
|
mPromisesForOperation.AppendElement(StreamAndPromiseForOperation(aStream,
|
||||||
|
aPromise,
|
||||||
|
aOperation));
|
||||||
|
}
|
||||||
|
|
||||||
|
void AudioCallbackDriver::CompleteAudioContextOperations(AsyncCubebOperation aOperation)
|
||||||
|
{
|
||||||
|
nsAutoTArray<StreamAndPromiseForOperation, 1> array;
|
||||||
|
|
||||||
|
// We can't lock for the whole function because AudioContextOperationCompleted
|
||||||
|
// will grab the monitor
|
||||||
|
{
|
||||||
|
MonitorAutoLock mon(GraphImpl()->GetMonitor());
|
||||||
|
array.SwapElements(mPromisesForOperation);
|
||||||
|
}
|
||||||
|
|
||||||
|
for (int32_t i = array.Length() - 1; i >= 0; i--) {
|
||||||
|
StreamAndPromiseForOperation& s = array[i];
|
||||||
|
if ((aOperation == AsyncCubebOperation::INIT &&
|
||||||
|
s.mOperation == AudioContextOperation::Resume) ||
|
||||||
|
(aOperation == AsyncCubebOperation::SHUTDOWN &&
|
||||||
|
s.mOperation != AudioContextOperation::Resume)) {
|
||||||
|
|
||||||
|
GraphImpl()->AudioContextOperationCompleted(s.mStream,
|
||||||
|
s.mPromise,
|
||||||
|
s.mOperation);
|
||||||
|
array.RemoveElementAt(i);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (!array.IsEmpty()) {
|
||||||
|
MonitorAutoLock mon(GraphImpl()->GetMonitor());
|
||||||
|
mPromisesForOperation.AppendElements(array);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
} // namepace mozilla
|
} // namepace mozilla
|
||||||
|
|
|
@ -13,6 +13,7 @@
|
||||||
#include "AudioSegment.h"
|
#include "AudioSegment.h"
|
||||||
#include "SelfRef.h"
|
#include "SelfRef.h"
|
||||||
#include "mozilla/Atomics.h"
|
#include "mozilla/Atomics.h"
|
||||||
|
#include "AudioContext.h"
|
||||||
|
|
||||||
struct cubeb_stream;
|
struct cubeb_stream;
|
||||||
|
|
||||||
|
@ -321,6 +322,21 @@ private:
|
||||||
GraphTime mSlice;
|
GraphTime mSlice;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
struct StreamAndPromiseForOperation
|
||||||
|
{
|
||||||
|
StreamAndPromiseForOperation(MediaStream* aStream,
|
||||||
|
void* aPromise,
|
||||||
|
dom::AudioContextOperation aOperation);
|
||||||
|
nsRefPtr<MediaStream> mStream;
|
||||||
|
void* mPromise;
|
||||||
|
dom::AudioContextOperation mOperation;
|
||||||
|
};
|
||||||
|
|
||||||
|
enum AsyncCubebOperation {
|
||||||
|
INIT,
|
||||||
|
SHUTDOWN
|
||||||
|
};
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* This is a graph driver that is based on callback functions called by the
|
* This is a graph driver that is based on callback functions called by the
|
||||||
* audio api. This ensures minimal audio latency, because it means there is no
|
* audio api. This ensures minimal audio latency, because it means there is no
|
||||||
|
@ -392,6 +408,12 @@ public:
|
||||||
return this;
|
return this;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/* Enqueue a promise that is going to be resolved when a specific operation
|
||||||
|
* occurs on the cubeb stream. */
|
||||||
|
void EnqueueStreamAndPromiseForOperation(MediaStream* aStream,
|
||||||
|
void* aPromise,
|
||||||
|
dom::AudioContextOperation aOperation);
|
||||||
|
|
||||||
bool IsSwitchingDevice() {
|
bool IsSwitchingDevice() {
|
||||||
#ifdef XP_MACOSX
|
#ifdef XP_MACOSX
|
||||||
return mSelfReference;
|
return mSelfReference;
|
||||||
|
@ -414,6 +436,8 @@ public:
|
||||||
/* Tell the driver whether this process is using a microphone or not. This is
|
/* Tell the driver whether this process is using a microphone or not. This is
|
||||||
* thread safe. */
|
* thread safe. */
|
||||||
void SetMicrophoneActive(bool aActive);
|
void SetMicrophoneActive(bool aActive);
|
||||||
|
|
||||||
|
void CompleteAudioContextOperations(AsyncCubebOperation aOperation);
|
||||||
private:
|
private:
|
||||||
/**
|
/**
|
||||||
* On certain MacBookPro, the microphone is located near the left speaker.
|
* On certain MacBookPro, the microphone is located near the left speaker.
|
||||||
|
@ -471,6 +495,7 @@ private:
|
||||||
/* Thread for off-main-thread initialization and
|
/* Thread for off-main-thread initialization and
|
||||||
* shutdown of the audio stream. */
|
* shutdown of the audio stream. */
|
||||||
nsCOMPtr<nsIThread> mInitShutdownThread;
|
nsCOMPtr<nsIThread> mInitShutdownThread;
|
||||||
|
nsAutoTArray<StreamAndPromiseForOperation, 1> mPromisesForOperation;
|
||||||
dom::AudioChannel mAudioChannel;
|
dom::AudioChannel mAudioChannel;
|
||||||
Atomic<bool> mInCallback;
|
Atomic<bool> mInCallback;
|
||||||
/* A thread has been created to be able to pause and restart the audio thread,
|
/* A thread has been created to be able to pause and restart the audio thread,
|
||||||
|
@ -498,12 +523,6 @@ private:
|
||||||
class AsyncCubebTask : public nsRunnable
|
class AsyncCubebTask : public nsRunnable
|
||||||
{
|
{
|
||||||
public:
|
public:
|
||||||
enum AsyncCubebOperation {
|
|
||||||
INIT,
|
|
||||||
SHUTDOWN,
|
|
||||||
SLEEP
|
|
||||||
};
|
|
||||||
|
|
||||||
|
|
||||||
AsyncCubebTask(AudioCallbackDriver* aDriver, AsyncCubebOperation aOperation);
|
AsyncCubebTask(AudioCallbackDriver* aDriver, AsyncCubebOperation aOperation);
|
||||||
|
|
||||||
|
|
|
@ -24,6 +24,7 @@
|
||||||
#include "AudioNodeEngine.h"
|
#include "AudioNodeEngine.h"
|
||||||
#include "AudioNodeStream.h"
|
#include "AudioNodeStream.h"
|
||||||
#include "AudioNodeExternalInputStream.h"
|
#include "AudioNodeExternalInputStream.h"
|
||||||
|
#include "mozilla/dom/AudioContextBinding.h"
|
||||||
#include <algorithm>
|
#include <algorithm>
|
||||||
#include "DOMMediaStream.h"
|
#include "DOMMediaStream.h"
|
||||||
#include "GeckoProfiler.h"
|
#include "GeckoProfiler.h"
|
||||||
|
@ -102,12 +103,31 @@ MediaStreamGraphImpl::FinishStream(MediaStream* aStream)
|
||||||
SetStreamOrderDirty();
|
SetStreamOrderDirty();
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static const GraphTime START_TIME_DELAYED = -1;
|
||||||
|
|
||||||
void
|
void
|
||||||
MediaStreamGraphImpl::AddStream(MediaStream* aStream)
|
MediaStreamGraphImpl::AddStream(MediaStream* aStream)
|
||||||
{
|
{
|
||||||
|
// Check if we're adding a stream to a suspended context, in which case, we
|
||||||
|
// add it to mSuspendedStreams, and delay setting mBufferStartTime
|
||||||
|
bool contextSuspended = false;
|
||||||
|
if (aStream->AsAudioNodeStream()) {
|
||||||
|
for (uint32_t i = 0; i < mSuspendedStreams.Length(); i++) {
|
||||||
|
if (aStream->AudioContextId() == mSuspendedStreams[i]->AudioContextId()) {
|
||||||
|
contextSuspended = true;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (contextSuspended) {
|
||||||
|
aStream->mBufferStartTime = START_TIME_DELAYED;
|
||||||
|
mSuspendedStreams.AppendElement(aStream);
|
||||||
|
STREAM_LOG(PR_LOG_DEBUG, ("Adding media stream %p to the graph, in the suspended stream array", aStream));
|
||||||
|
} else {
|
||||||
aStream->mBufferStartTime = IterationEnd();
|
aStream->mBufferStartTime = IterationEnd();
|
||||||
mStreams.AppendElement(aStream);
|
mStreams.AppendElement(aStream);
|
||||||
STREAM_LOG(PR_LOG_DEBUG, ("Adding media stream %p to the graph", aStream));
|
STREAM_LOG(PR_LOG_DEBUG, ("Adding media stream %p to the graph", aStream));
|
||||||
|
}
|
||||||
|
|
||||||
SetStreamOrderDirty();
|
SetStreamOrderDirty();
|
||||||
}
|
}
|
||||||
|
@ -131,6 +151,8 @@ MediaStreamGraphImpl::RemoveStream(MediaStream* aStream)
|
||||||
SetStreamOrderDirty();
|
SetStreamOrderDirty();
|
||||||
|
|
||||||
mStreams.RemoveElement(aStream);
|
mStreams.RemoveElement(aStream);
|
||||||
|
mSuspendedStreams.RemoveElement(aStream);
|
||||||
|
|
||||||
NS_RELEASE(aStream); // probably destroying it
|
NS_RELEASE(aStream); // probably destroying it
|
||||||
|
|
||||||
STREAM_LOG(PR_LOG_DEBUG, ("Removing media stream %p from the graph", aStream));
|
STREAM_LOG(PR_LOG_DEBUG, ("Removing media stream %p from the graph", aStream));
|
||||||
|
@ -380,15 +402,23 @@ MediaStreamGraphImpl::UpdateCurrentTimeForStreams(GraphTime aPrevCurrentTime, Gr
|
||||||
{
|
{
|
||||||
nsTArray<MediaStream*> streamsReadyToFinish;
|
nsTArray<MediaStream*> streamsReadyToFinish;
|
||||||
nsAutoTArray<bool,800> streamHasOutput;
|
nsAutoTArray<bool,800> streamHasOutput;
|
||||||
|
|
||||||
|
nsTArray<MediaStream*>* runningAndSuspendedPair[2];
|
||||||
|
runningAndSuspendedPair[0] = &mStreams;
|
||||||
|
runningAndSuspendedPair[1] = &mSuspendedStreams;
|
||||||
|
|
||||||
streamHasOutput.SetLength(mStreams.Length());
|
streamHasOutput.SetLength(mStreams.Length());
|
||||||
for (uint32_t i = 0; i < mStreams.Length(); ++i) {
|
|
||||||
MediaStream* stream = mStreams[i];
|
for (uint32_t array = 0; array < 2; array++) {
|
||||||
|
for (uint32_t i = 0; i < runningAndSuspendedPair[array]->Length(); ++i) {
|
||||||
|
MediaStream* stream = (*runningAndSuspendedPair[array])[i];
|
||||||
|
|
||||||
// Calculate blocked time and fire Blocked/Unblocked events
|
// Calculate blocked time and fire Blocked/Unblocked events
|
||||||
GraphTime blockedTime = 0;
|
GraphTime blockedTime = 0;
|
||||||
GraphTime t = aPrevCurrentTime;
|
GraphTime t = aPrevCurrentTime;
|
||||||
// include |nextCurrentTime| to ensure NotifyBlockingChanged() is called
|
// include |nextCurrentTime| to ensure NotifyBlockingChanged() is called
|
||||||
// before NotifyEvent(this, EVENT_FINISHED) when |nextCurrentTime == stream end time|
|
// before NotifyEvent(this, EVENT_FINISHED) when |nextCurrentTime ==
|
||||||
|
// stream end time|
|
||||||
while (t <= aNextCurrentTime) {
|
while (t <= aNextCurrentTime) {
|
||||||
GraphTime end;
|
GraphTime end;
|
||||||
bool blocked = stream->mBlocked.GetAt(t, &end);
|
bool blocked = stream->mBlocked.GetAt(t, &end);
|
||||||
|
@ -398,32 +428,39 @@ MediaStreamGraphImpl::UpdateCurrentTimeForStreams(GraphTime aPrevCurrentTime, Gr
|
||||||
if (blocked != stream->mNotifiedBlocked) {
|
if (blocked != stream->mNotifiedBlocked) {
|
||||||
for (uint32_t j = 0; j < stream->mListeners.Length(); ++j) {
|
for (uint32_t j = 0; j < stream->mListeners.Length(); ++j) {
|
||||||
MediaStreamListener* l = stream->mListeners[j];
|
MediaStreamListener* l = stream->mListeners[j];
|
||||||
l->NotifyBlockingChanged(this,
|
l->NotifyBlockingChanged(this, blocked
|
||||||
blocked ? MediaStreamListener::BLOCKED : MediaStreamListener::UNBLOCKED);
|
? MediaStreamListener::BLOCKED
|
||||||
|
: MediaStreamListener::UNBLOCKED);
|
||||||
}
|
}
|
||||||
stream->mNotifiedBlocked = blocked;
|
stream->mNotifiedBlocked = blocked;
|
||||||
}
|
}
|
||||||
t = end;
|
t = end;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
stream->AdvanceTimeVaryingValuesToCurrentTime(aNextCurrentTime,
|
||||||
stream->AdvanceTimeVaryingValuesToCurrentTime(aNextCurrentTime, blockedTime);
|
blockedTime);
|
||||||
// Advance mBlocked last so that implementations of
|
// Advance mBlocked last so that implementations of
|
||||||
// AdvanceTimeVaryingValuesToCurrentTime can rely on the value of mBlocked.
|
// AdvanceTimeVaryingValuesToCurrentTime can rely on the value of
|
||||||
|
// mBlocked.
|
||||||
stream->mBlocked.AdvanceCurrentTime(aNextCurrentTime);
|
stream->mBlocked.AdvanceCurrentTime(aNextCurrentTime);
|
||||||
|
|
||||||
|
if (runningAndSuspendedPair[array] == &mStreams) {
|
||||||
streamHasOutput[i] = blockedTime < aNextCurrentTime - aPrevCurrentTime;
|
streamHasOutput[i] = blockedTime < aNextCurrentTime - aPrevCurrentTime;
|
||||||
// Make this an assertion when bug 957832 is fixed.
|
// Make this an assertion when bug 957832 is fixed.
|
||||||
NS_WARN_IF_FALSE(!streamHasOutput[i] || !stream->mNotifiedFinished,
|
NS_WARN_IF_FALSE(
|
||||||
|
!streamHasOutput[i] || !stream->mNotifiedFinished,
|
||||||
"Shouldn't have already notified of finish *and* have output!");
|
"Shouldn't have already notified of finish *and* have output!");
|
||||||
|
|
||||||
if (stream->mFinished && !stream->mNotifiedFinished) {
|
if (stream->mFinished && !stream->mNotifiedFinished) {
|
||||||
streamsReadyToFinish.AppendElement(stream);
|
streamsReadyToFinish.AppendElement(stream);
|
||||||
}
|
}
|
||||||
STREAM_LOG(PR_LOG_DEBUG+1, ("MediaStream %p bufferStartTime=%f blockedTime=%f",
|
}
|
||||||
stream, MediaTimeToSeconds(stream->mBufferStartTime),
|
STREAM_LOG(PR_LOG_DEBUG + 1,
|
||||||
|
("MediaStream %p bufferStartTime=%f blockedTime=%f", stream,
|
||||||
|
MediaTimeToSeconds(stream->mBufferStartTime),
|
||||||
MediaTimeToSeconds(blockedTime)));
|
MediaTimeToSeconds(blockedTime)));
|
||||||
}
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
for (uint32_t i = 0; i < streamHasOutput.Length(); ++i) {
|
for (uint32_t i = 0; i < streamHasOutput.Length(); ++i) {
|
||||||
|
@ -520,6 +557,21 @@ MediaStreamGraphImpl::MarkConsumed(MediaStream* aStream)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
bool
|
||||||
|
MediaStreamGraphImpl::StreamSuspended(MediaStream* aStream)
|
||||||
|
{
|
||||||
|
// Only AudioNodeStreams can be suspended, so we can shortcut here.
|
||||||
|
return aStream->AsAudioNodeStream() &&
|
||||||
|
mSuspendedStreams.IndexOf(aStream) != mSuspendedStreams.NoIndex;
|
||||||
|
}
|
||||||
|
|
||||||
|
namespace {
|
||||||
|
// Value of mCycleMarker for unvisited streams in cycle detection.
|
||||||
|
const uint32_t NOT_VISITED = UINT32_MAX;
|
||||||
|
// Value of mCycleMarker for ordered streams in muted cycles.
|
||||||
|
const uint32_t IN_MUTED_CYCLE = 1;
|
||||||
|
}
|
||||||
|
|
||||||
void
|
void
|
||||||
MediaStreamGraphImpl::UpdateStreamOrder()
|
MediaStreamGraphImpl::UpdateStreamOrder()
|
||||||
{
|
{
|
||||||
|
@ -527,11 +579,6 @@ MediaStreamGraphImpl::UpdateStreamOrder()
|
||||||
bool shouldAEC = false;
|
bool shouldAEC = false;
|
||||||
#endif
|
#endif
|
||||||
bool audioTrackPresent = false;
|
bool audioTrackPresent = false;
|
||||||
// Value of mCycleMarker for unvisited streams in cycle detection.
|
|
||||||
const uint32_t NOT_VISITED = UINT32_MAX;
|
|
||||||
// Value of mCycleMarker for ordered streams in muted cycles.
|
|
||||||
const uint32_t IN_MUTED_CYCLE = 1;
|
|
||||||
|
|
||||||
for (uint32_t i = 0; i < mStreams.Length(); ++i) {
|
for (uint32_t i = 0; i < mStreams.Length(); ++i) {
|
||||||
MediaStream* stream = mStreams[i];
|
MediaStream* stream = mStreams[i];
|
||||||
stream->mIsConsumed = false;
|
stream->mIsConsumed = false;
|
||||||
|
@ -647,12 +694,19 @@ MediaStreamGraphImpl::UpdateStreamOrder()
|
||||||
// Not-visited input streams should be processed first.
|
// Not-visited input streams should be processed first.
|
||||||
// SourceMediaStreams have already been ordered.
|
// SourceMediaStreams have already been ordered.
|
||||||
for (uint32_t i = inputs.Length(); i--; ) {
|
for (uint32_t i = inputs.Length(); i--; ) {
|
||||||
|
if (StreamSuspended(inputs[i]->mSource)) {
|
||||||
|
continue;
|
||||||
|
}
|
||||||
auto input = inputs[i]->mSource->AsProcessedStream();
|
auto input = inputs[i]->mSource->AsProcessedStream();
|
||||||
if (input && input->mCycleMarker == NOT_VISITED) {
|
if (input && input->mCycleMarker == NOT_VISITED) {
|
||||||
|
// It can be that this stream has an input which is from a suspended
|
||||||
|
// AudioContext.
|
||||||
|
if (input->isInList()) {
|
||||||
input->remove();
|
input->remove();
|
||||||
dfsStack.insertFront(input);
|
dfsStack.insertFront(input);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
}
|
||||||
continue;
|
continue;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -666,6 +720,9 @@ MediaStreamGraphImpl::UpdateStreamOrder()
|
||||||
// unless it is part of the cycle.
|
// unless it is part of the cycle.
|
||||||
uint32_t cycleStackMarker = 0;
|
uint32_t cycleStackMarker = 0;
|
||||||
for (uint32_t i = inputs.Length(); i--; ) {
|
for (uint32_t i = inputs.Length(); i--; ) {
|
||||||
|
if (StreamSuspended(inputs[i]->mSource)) {
|
||||||
|
continue;
|
||||||
|
}
|
||||||
auto input = inputs[i]->mSource->AsProcessedStream();
|
auto input = inputs[i]->mSource->AsProcessedStream();
|
||||||
if (input) {
|
if (input) {
|
||||||
cycleStackMarker = std::max(cycleStackMarker, input->mCycleMarker);
|
cycleStackMarker = std::max(cycleStackMarker, input->mCycleMarker);
|
||||||
|
@ -761,12 +818,18 @@ MediaStreamGraphImpl::RecomputeBlocking(GraphTime aEndBlockingDecisions)
|
||||||
|
|
||||||
STREAM_LOG(PR_LOG_DEBUG+1, ("Media graph %p computing blocking for time %f",
|
STREAM_LOG(PR_LOG_DEBUG+1, ("Media graph %p computing blocking for time %f",
|
||||||
this, MediaTimeToSeconds(CurrentDriver()->StateComputedTime())));
|
this, MediaTimeToSeconds(CurrentDriver()->StateComputedTime())));
|
||||||
for (uint32_t i = 0; i < mStreams.Length(); ++i) {
|
nsTArray<MediaStream*>* runningAndSuspendedPair[2];
|
||||||
MediaStream* stream = mStreams[i];
|
runningAndSuspendedPair[0] = &mStreams;
|
||||||
|
runningAndSuspendedPair[1] = &mSuspendedStreams;
|
||||||
|
|
||||||
|
for (uint32_t array = 0; array < 2; array++) {
|
||||||
|
for (uint32_t i = 0; i < (*runningAndSuspendedPair[array]).Length(); ++i) {
|
||||||
|
MediaStream* stream = (*runningAndSuspendedPair[array])[i];
|
||||||
if (!stream->mInBlockingSet) {
|
if (!stream->mInBlockingSet) {
|
||||||
// Compute a partition of the streams containing 'stream' such that we can
|
// Compute a partition of the streams containing 'stream' such that we
|
||||||
|
// can
|
||||||
// compute the blocking status of each subset independently.
|
// compute the blocking status of each subset independently.
|
||||||
nsAutoTArray<MediaStream*,10> streamSet;
|
nsAutoTArray<MediaStream*, 10> streamSet;
|
||||||
AddBlockingRelatedStreamsToSet(&streamSet, stream);
|
AddBlockingRelatedStreamsToSet(&streamSet, stream);
|
||||||
|
|
||||||
GraphTime end;
|
GraphTime end;
|
||||||
|
@ -786,6 +849,7 @@ MediaStreamGraphImpl::RecomputeBlocking(GraphTime aEndBlockingDecisions)
|
||||||
blockingDecisionsWillChange = true;
|
blockingDecisionsWillChange = true;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
}
|
||||||
STREAM_LOG(PR_LOG_DEBUG+1, ("Media graph %p computed blocking for interval %f to %f",
|
STREAM_LOG(PR_LOG_DEBUG+1, ("Media graph %p computed blocking for interval %f to %f",
|
||||||
this, MediaTimeToSeconds(CurrentDriver()->StateComputedTime()),
|
this, MediaTimeToSeconds(CurrentDriver()->StateComputedTime()),
|
||||||
MediaTimeToSeconds(aEndBlockingDecisions)));
|
MediaTimeToSeconds(aEndBlockingDecisions)));
|
||||||
|
@ -998,14 +1062,6 @@ MediaStreamGraphImpl::PlayAudio(MediaStream* aStream,
|
||||||
// sample. One sample may be played twice, but this should not happen
|
// sample. One sample may be played twice, but this should not happen
|
||||||
// again during an unblocked sequence of track samples.
|
// again during an unblocked sequence of track samples.
|
||||||
StreamTime offset = GraphTimeToStreamTime(aStream, aFrom);
|
StreamTime offset = GraphTimeToStreamTime(aStream, aFrom);
|
||||||
if (audioOutput.mLastTickWritten &&
|
|
||||||
audioOutput.mLastTickWritten != offset) {
|
|
||||||
// If there is a global underrun of the MSG, this property won't hold, and
|
|
||||||
// we reset the sample count tracking.
|
|
||||||
if (offset - audioOutput.mLastTickWritten == 1) {
|
|
||||||
offset = audioOutput.mLastTickWritten;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// We don't update aStream->mBufferStartTime here to account for time spent
|
// We don't update aStream->mBufferStartTime here to account for time spent
|
||||||
// blocked. Instead, we'll update it in UpdateCurrentTimeForStreams after
|
// blocked. Instead, we'll update it in UpdateCurrentTimeForStreams after
|
||||||
|
@ -1037,11 +1093,13 @@ MediaStreamGraphImpl::PlayAudio(MediaStream* aStream,
|
||||||
} else {
|
} else {
|
||||||
StreamTime endTicksNeeded = offset + toWrite;
|
StreamTime endTicksNeeded = offset + toWrite;
|
||||||
StreamTime endTicksAvailable = audio->GetDuration();
|
StreamTime endTicksAvailable = audio->GetDuration();
|
||||||
STREAM_LOG(PR_LOG_DEBUG+1, ("MediaStream %p writing %ld samples for %f to %f (samples %ld to %ld)\n",
|
|
||||||
aStream, toWrite, MediaTimeToSeconds(t), MediaTimeToSeconds(end),
|
|
||||||
offset, endTicksNeeded));
|
|
||||||
|
|
||||||
if (endTicksNeeded <= endTicksAvailable) {
|
if (endTicksNeeded <= endTicksAvailable) {
|
||||||
|
STREAM_LOG(PR_LOG_DEBUG + 1,
|
||||||
|
("MediaStream %p writing %ld samples for %f to %f "
|
||||||
|
"(samples %ld to %ld)\n",
|
||||||
|
aStream, toWrite, MediaTimeToSeconds(t),
|
||||||
|
MediaTimeToSeconds(end), offset, endTicksNeeded));
|
||||||
output.AppendSlice(*audio, offset, endTicksNeeded);
|
output.AppendSlice(*audio, offset, endTicksNeeded);
|
||||||
ticksWritten += toWrite;
|
ticksWritten += toWrite;
|
||||||
offset = endTicksNeeded;
|
offset = endTicksNeeded;
|
||||||
|
@ -1052,12 +1110,22 @@ MediaStreamGraphImpl::PlayAudio(MediaStream* aStream,
|
||||||
if (endTicksNeeded > endTicksAvailable &&
|
if (endTicksNeeded > endTicksAvailable &&
|
||||||
offset < endTicksAvailable) {
|
offset < endTicksAvailable) {
|
||||||
output.AppendSlice(*audio, offset, endTicksAvailable);
|
output.AppendSlice(*audio, offset, endTicksAvailable);
|
||||||
|
STREAM_LOG(PR_LOG_DEBUG + 1,
|
||||||
|
("MediaStream %p writing %ld samples for %f to %f "
|
||||||
|
"(samples %ld to %ld)\n",
|
||||||
|
aStream, toWrite, MediaTimeToSeconds(t),
|
||||||
|
MediaTimeToSeconds(end), offset, endTicksNeeded));
|
||||||
uint32_t available = endTicksAvailable - offset;
|
uint32_t available = endTicksAvailable - offset;
|
||||||
ticksWritten += available;
|
ticksWritten += available;
|
||||||
toWrite -= available;
|
toWrite -= available;
|
||||||
offset = endTicksAvailable;
|
offset = endTicksAvailable;
|
||||||
}
|
}
|
||||||
output.AppendNullData(toWrite);
|
output.AppendNullData(toWrite);
|
||||||
|
STREAM_LOG(PR_LOG_DEBUG + 1,
|
||||||
|
("MediaStream %p writing %ld padding slsamples for %f to "
|
||||||
|
"%f (samples %ld to %ld)\n",
|
||||||
|
aStream, toWrite, MediaTimeToSeconds(t),
|
||||||
|
MediaTimeToSeconds(end), offset, endTicksNeeded));
|
||||||
ticksWritten += toWrite;
|
ticksWritten += toWrite;
|
||||||
}
|
}
|
||||||
output.ApplyVolume(volume);
|
output.ApplyVolume(volume);
|
||||||
|
@ -1789,7 +1857,7 @@ MediaStreamGraphImpl::EnsureStableStateEventPosted()
|
||||||
void
|
void
|
||||||
MediaStreamGraphImpl::AppendMessage(ControlMessage* aMessage)
|
MediaStreamGraphImpl::AppendMessage(ControlMessage* aMessage)
|
||||||
{
|
{
|
||||||
NS_ASSERTION(NS_IsMainThread(), "main thread only");
|
MOZ_ASSERT(NS_IsMainThread(), "main thread only");
|
||||||
NS_ASSERTION(!aMessage->GetStream() ||
|
NS_ASSERTION(!aMessage->GetStream() ||
|
||||||
!aMessage->GetStream()->IsDestroyed(),
|
!aMessage->GetStream()->IsDestroyed(),
|
||||||
"Stream already destroyed");
|
"Stream already destroyed");
|
||||||
|
@ -2148,6 +2216,46 @@ MediaStream::ChangeExplicitBlockerCount(int32_t aDelta)
|
||||||
GraphImpl()->AppendMessage(new Message(this, aDelta));
|
GraphImpl()->AppendMessage(new Message(this, aDelta));
|
||||||
}
|
}
|
||||||
|
|
||||||
|
void
|
||||||
|
MediaStream::BlockStreamIfNeeded()
|
||||||
|
{
|
||||||
|
class Message : public ControlMessage {
|
||||||
|
public:
|
||||||
|
explicit Message(MediaStream* aStream) : ControlMessage(aStream)
|
||||||
|
{ }
|
||||||
|
virtual void Run()
|
||||||
|
{
|
||||||
|
mStream->BlockStreamIfNeededImpl(
|
||||||
|
mStream->GraphImpl()->CurrentDriver()->StateComputedTime());
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
if (mMainThreadDestroyed) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
GraphImpl()->AppendMessage(new Message(this));
|
||||||
|
}
|
||||||
|
|
||||||
|
void
|
||||||
|
MediaStream::UnblockStreamIfNeeded()
|
||||||
|
{
|
||||||
|
class Message : public ControlMessage {
|
||||||
|
public:
|
||||||
|
explicit Message(MediaStream* aStream) : ControlMessage(aStream)
|
||||||
|
{ }
|
||||||
|
virtual void Run()
|
||||||
|
{
|
||||||
|
mStream->UnblockStreamIfNeededImpl(
|
||||||
|
mStream->GraphImpl()->CurrentDriver()->StateComputedTime());
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
if (mMainThreadDestroyed) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
GraphImpl()->AppendMessage(new Message(this));
|
||||||
|
}
|
||||||
|
|
||||||
void
|
void
|
||||||
MediaStream::AddListenerImpl(already_AddRefed<MediaStreamListener> aListener)
|
MediaStream::AddListenerImpl(already_AddRefed<MediaStreamListener> aListener)
|
||||||
{
|
{
|
||||||
|
@ -3031,7 +3139,8 @@ MediaStreamGraph::CreateAudioNodeExternalInputStream(AudioNodeEngine* aEngine, T
|
||||||
if (!aSampleRate) {
|
if (!aSampleRate) {
|
||||||
aSampleRate = aEngine->NodeMainThread()->Context()->SampleRate();
|
aSampleRate = aEngine->NodeMainThread()->Context()->SampleRate();
|
||||||
}
|
}
|
||||||
AudioNodeExternalInputStream* stream = new AudioNodeExternalInputStream(aEngine, aSampleRate);
|
AudioNodeExternalInputStream* stream = new AudioNodeExternalInputStream(
|
||||||
|
aEngine, aSampleRate, aEngine->NodeMainThread()->Context()->Id());
|
||||||
NS_ADDREF(stream);
|
NS_ADDREF(stream);
|
||||||
MediaStreamGraphImpl* graph = static_cast<MediaStreamGraphImpl*>(this);
|
MediaStreamGraphImpl* graph = static_cast<MediaStreamGraphImpl*>(this);
|
||||||
stream->SetGraphImpl(graph);
|
stream->SetGraphImpl(graph);
|
||||||
|
@ -3048,7 +3157,12 @@ MediaStreamGraph::CreateAudioNodeStream(AudioNodeEngine* aEngine,
|
||||||
if (!aSampleRate) {
|
if (!aSampleRate) {
|
||||||
aSampleRate = aEngine->NodeMainThread()->Context()->SampleRate();
|
aSampleRate = aEngine->NodeMainThread()->Context()->SampleRate();
|
||||||
}
|
}
|
||||||
AudioNodeStream* stream = new AudioNodeStream(aEngine, aKind, aSampleRate);
|
// MediaRecorders use an AudioNodeStream, but no AudioNode
|
||||||
|
AudioNode* node = aEngine->NodeMainThread();
|
||||||
|
dom::AudioContext::AudioContextId contextIdForStream = node ? node->Context()->Id() :
|
||||||
|
NO_AUDIO_CONTEXT;
|
||||||
|
AudioNodeStream* stream = new AudioNodeStream(aEngine, aKind, aSampleRate,
|
||||||
|
contextIdForStream);
|
||||||
NS_ADDREF(stream);
|
NS_ADDREF(stream);
|
||||||
MediaStreamGraphImpl* graph = static_cast<MediaStreamGraphImpl*>(this);
|
MediaStreamGraphImpl* graph = static_cast<MediaStreamGraphImpl*>(this);
|
||||||
stream->SetGraphImpl(graph);
|
stream->SetGraphImpl(graph);
|
||||||
|
@ -3061,6 +3175,273 @@ MediaStreamGraph::CreateAudioNodeStream(AudioNodeEngine* aEngine,
|
||||||
return stream;
|
return stream;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
class GraphStartedRunnable final : public nsRunnable
|
||||||
|
{
|
||||||
|
public:
|
||||||
|
GraphStartedRunnable(AudioNodeStream* aStream, MediaStreamGraph* aGraph)
|
||||||
|
: mStream(aStream)
|
||||||
|
, mGraph(aGraph)
|
||||||
|
{ }
|
||||||
|
|
||||||
|
NS_IMETHOD Run() {
|
||||||
|
mGraph->NotifyWhenGraphStarted(mStream);
|
||||||
|
return NS_OK;
|
||||||
|
}
|
||||||
|
|
||||||
|
private:
|
||||||
|
nsRefPtr<AudioNodeStream> mStream;
|
||||||
|
MediaStreamGraph* mGraph;
|
||||||
|
};
|
||||||
|
|
||||||
|
void
|
||||||
|
MediaStreamGraph::NotifyWhenGraphStarted(AudioNodeStream* aStream)
|
||||||
|
{
|
||||||
|
class GraphStartedNotificationControlMessage : public ControlMessage
|
||||||
|
{
|
||||||
|
public:
|
||||||
|
explicit GraphStartedNotificationControlMessage(AudioNodeStream* aStream)
|
||||||
|
: ControlMessage(aStream)
|
||||||
|
{
|
||||||
|
}
|
||||||
|
virtual void Run()
|
||||||
|
{
|
||||||
|
// This runs on the graph thread, so when this runs, and the current
|
||||||
|
// driver is an AudioCallbackDriver, we know the audio hardware is
|
||||||
|
// started. If not, we are going to switch soon, keep reposting this
|
||||||
|
// ControlMessage.
|
||||||
|
MediaStreamGraphImpl* graphImpl = mStream->GraphImpl();
|
||||||
|
if (graphImpl->CurrentDriver()->AsAudioCallbackDriver()) {
|
||||||
|
nsCOMPtr<nsIRunnable> event = new dom::StateChangeTask(
|
||||||
|
mStream->AsAudioNodeStream(), nullptr, AudioContextState::Running);
|
||||||
|
NS_DispatchToMainThread(event);
|
||||||
|
} else {
|
||||||
|
nsCOMPtr<nsIRunnable> event = new GraphStartedRunnable(
|
||||||
|
mStream->AsAudioNodeStream(), mStream->Graph());
|
||||||
|
NS_DispatchToMainThread(event);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
virtual void RunDuringShutdown()
|
||||||
|
{
|
||||||
|
MOZ_ASSERT(false, "We should be reviving the graph?");
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
MediaStreamGraphImpl* graphImpl = static_cast<MediaStreamGraphImpl*>(this);
|
||||||
|
graphImpl->AppendMessage(new GraphStartedNotificationControlMessage(aStream));
|
||||||
|
}
|
||||||
|
|
||||||
|
void
|
||||||
|
MediaStreamGraphImpl::ResetVisitedStreamState()
|
||||||
|
{
|
||||||
|
// Reset the visited/consumed/blocked state of the streams.
|
||||||
|
nsTArray<MediaStream*>* runningAndSuspendedPair[2];
|
||||||
|
runningAndSuspendedPair[0] = &mStreams;
|
||||||
|
runningAndSuspendedPair[1] = &mSuspendedStreams;
|
||||||
|
|
||||||
|
for (uint32_t array = 0; array < 2; array++) {
|
||||||
|
for (uint32_t i = 0; i < runningAndSuspendedPair[array]->Length(); ++i) {
|
||||||
|
ProcessedMediaStream* ps =
|
||||||
|
(*runningAndSuspendedPair[array])[i]->AsProcessedStream();
|
||||||
|
if (ps) {
|
||||||
|
ps->mCycleMarker = NOT_VISITED;
|
||||||
|
ps->mIsConsumed = false;
|
||||||
|
ps->mInBlockingSet = false;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
void
|
||||||
|
MediaStreamGraphImpl::StreamSetForAudioContext(dom::AudioContext::AudioContextId aAudioContextId,
|
||||||
|
mozilla::LinkedList<MediaStream>& aStreamSet)
|
||||||
|
{
|
||||||
|
nsTArray<MediaStream*>* runningAndSuspendedPair[2];
|
||||||
|
runningAndSuspendedPair[0] = &mStreams;
|
||||||
|
runningAndSuspendedPair[1] = &mSuspendedStreams;
|
||||||
|
|
||||||
|
for (uint32_t array = 0; array < 2; array++) {
|
||||||
|
for (uint32_t i = 0; i < runningAndSuspendedPair[array]->Length(); ++i) {
|
||||||
|
MediaStream* stream = (*runningAndSuspendedPair[array])[i];
|
||||||
|
if (aAudioContextId == stream->AudioContextId()) {
|
||||||
|
aStreamSet.insertFront(stream);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
void
|
||||||
|
MediaStreamGraphImpl::MoveStreams(AudioContextOperation aAudioContextOperation,
|
||||||
|
mozilla::LinkedList<MediaStream>& aStreamSet)
|
||||||
|
{
|
||||||
|
// For our purpose, Suspend and Close are equivalent: we want to remove the
|
||||||
|
// streams from the set of streams that are going to be processed.
|
||||||
|
nsTArray<MediaStream*>& from =
|
||||||
|
aAudioContextOperation == AudioContextOperation::Resume ? mSuspendedStreams
|
||||||
|
: mStreams;
|
||||||
|
nsTArray<MediaStream*>& to =
|
||||||
|
aAudioContextOperation == AudioContextOperation::Resume ? mStreams
|
||||||
|
: mSuspendedStreams;
|
||||||
|
|
||||||
|
MediaStream* stream;
|
||||||
|
while ((stream = aStreamSet.getFirst())) {
|
||||||
|
// It is posible to not find the stream here, if there has been two
|
||||||
|
// suspend/resume/close calls in a row.
|
||||||
|
auto i = from.IndexOf(stream);
|
||||||
|
if (i != from.NoIndex) {
|
||||||
|
from.RemoveElementAt(i);
|
||||||
|
to.AppendElement(stream);
|
||||||
|
}
|
||||||
|
|
||||||
|
// If streams got added during a period where an AudioContext was suspended,
|
||||||
|
// set their buffer start time to the appropriate value now:
|
||||||
|
if (aAudioContextOperation == AudioContextOperation::Resume &&
|
||||||
|
stream->mBufferStartTime == START_TIME_DELAYED) {
|
||||||
|
stream->mBufferStartTime = IterationEnd();
|
||||||
|
}
|
||||||
|
|
||||||
|
stream->remove();
|
||||||
|
}
|
||||||
|
STREAM_LOG(PR_LOG_DEBUG, ("Moving streams between suspended and running"
|
||||||
|
"state: mStreams: %d, mSuspendedStreams: %d\n", mStreams.Length(),
|
||||||
|
mSuspendedStreams.Length()));
|
||||||
|
#ifdef DEBUG
|
||||||
|
// The intersection of the two arrays should be null.
|
||||||
|
for (uint32_t i = 0; i < mStreams.Length(); i++) {
|
||||||
|
for (uint32_t j = 0; j < mSuspendedStreams.Length(); j++) {
|
||||||
|
MOZ_ASSERT(
|
||||||
|
mStreams[i] != mSuspendedStreams[j],
|
||||||
|
"The suspended stream set and running stream set are not disjoint.");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
#endif
|
||||||
|
}
|
||||||
|
|
||||||
|
void
|
||||||
|
MediaStreamGraphImpl::AudioContextOperationCompleted(MediaStream* aStream,
|
||||||
|
void* aPromise,
|
||||||
|
AudioContextOperation aOperation)
|
||||||
|
{
|
||||||
|
// This can be called from the thread created to do cubeb operation, or the
|
||||||
|
// MSG thread. The pointers passed back here are refcounted, so are still
|
||||||
|
// alive.
|
||||||
|
MonitorAutoLock lock(mMonitor);
|
||||||
|
|
||||||
|
AudioContextState state;
|
||||||
|
switch (aOperation) {
|
||||||
|
case Suspend: state = AudioContextState::Suspended; break;
|
||||||
|
case Resume: state = AudioContextState::Running; break;
|
||||||
|
case Close: state = AudioContextState::Closed; break;
|
||||||
|
default: MOZ_CRASH("Not handled.");
|
||||||
|
}
|
||||||
|
|
||||||
|
nsCOMPtr<nsIRunnable> event = new dom::StateChangeTask(
|
||||||
|
aStream->AsAudioNodeStream(), aPromise, state);
|
||||||
|
NS_DispatchToMainThread(event);
|
||||||
|
}
|
||||||
|
|
||||||
|
void
|
||||||
|
MediaStreamGraphImpl::ApplyAudioContextOperationImpl(AudioNodeStream* aStream,
|
||||||
|
AudioContextOperation aOperation,
|
||||||
|
void* aPromise)
|
||||||
|
{
|
||||||
|
MOZ_ASSERT(CurrentDriver()->OnThread());
|
||||||
|
mozilla::LinkedList<MediaStream> streamSet;
|
||||||
|
|
||||||
|
SetStreamOrderDirty();
|
||||||
|
|
||||||
|
ResetVisitedStreamState();
|
||||||
|
|
||||||
|
StreamSetForAudioContext(aStream->AudioContextId(), streamSet);
|
||||||
|
|
||||||
|
MoveStreams(aOperation, streamSet);
|
||||||
|
MOZ_ASSERT(!streamSet.getFirst(),
|
||||||
|
"Streams should be removed from the list after having been moved.");
|
||||||
|
|
||||||
|
// If we have suspended the last AudioContext, and we don't have other
|
||||||
|
// streams that have audio, this graph will automatically switch to a
|
||||||
|
// SystemCallbackDriver, because it can't find a MediaStream that has an audio
|
||||||
|
// track. When resuming, force switching to an AudioCallbackDriver. It would
|
||||||
|
// have happened at the next iteration anyways, but doing this now save
|
||||||
|
// some time.
|
||||||
|
if (aOperation == AudioContextOperation::Resume) {
|
||||||
|
if (!CurrentDriver()->AsAudioCallbackDriver()) {
|
||||||
|
AudioCallbackDriver* driver = new AudioCallbackDriver(this);
|
||||||
|
driver->EnqueueStreamAndPromiseForOperation(aStream, aPromise, aOperation);
|
||||||
|
mMixer.AddCallback(driver);
|
||||||
|
CurrentDriver()->SwitchAtNextIteration(driver);
|
||||||
|
} else {
|
||||||
|
// We are resuming a context, but we are already using an
|
||||||
|
// AudioCallbackDriver, we can resolve the promise now.
|
||||||
|
AudioContextOperationCompleted(aStream, aPromise, aOperation);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
// Close, suspend: check if we are going to switch to a
|
||||||
|
// SystemAudioCallbackDriver, and pass the promise to the AudioCallbackDriver
|
||||||
|
// if that's the case, so it can notify the content.
|
||||||
|
// This is the same logic as in UpdateStreamOrder, but it's simpler to have it
|
||||||
|
// here as well so we don't have to store the Promise(s) on the Graph.
|
||||||
|
if (aOperation != AudioContextOperation::Resume) {
|
||||||
|
bool audioTrackPresent = false;
|
||||||
|
for (uint32_t i = 0; i < mStreams.Length(); ++i) {
|
||||||
|
MediaStream* stream = mStreams[i];
|
||||||
|
if (stream->AsAudioNodeStream()) {
|
||||||
|
audioTrackPresent = true;
|
||||||
|
}
|
||||||
|
for (StreamBuffer::TrackIter tracks(stream->GetStreamBuffer(), MediaSegment::AUDIO);
|
||||||
|
!tracks.IsEnded(); tracks.Next()) {
|
||||||
|
audioTrackPresent = true;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if (!audioTrackPresent && CurrentDriver()->AsAudioCallbackDriver()) {
|
||||||
|
CurrentDriver()->AsAudioCallbackDriver()->
|
||||||
|
EnqueueStreamAndPromiseForOperation(aStream, aPromise, aOperation);
|
||||||
|
|
||||||
|
SystemClockDriver* driver = new SystemClockDriver(this);
|
||||||
|
CurrentDriver()->SwitchAtNextIteration(driver);
|
||||||
|
} else {
|
||||||
|
// We are closing or suspending an AudioContext, but something else is
|
||||||
|
// using the audio stream, we can resolve the promise now.
|
||||||
|
AudioContextOperationCompleted(aStream, aPromise, aOperation);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
void
|
||||||
|
MediaStreamGraph::ApplyAudioContextOperation(AudioNodeStream* aNodeStream,
|
||||||
|
AudioContextOperation aOperation,
|
||||||
|
void* aPromise)
|
||||||
|
{
|
||||||
|
class AudioContextOperationControlMessage : public ControlMessage
|
||||||
|
{
|
||||||
|
public:
|
||||||
|
AudioContextOperationControlMessage(AudioNodeStream* aStream,
|
||||||
|
AudioContextOperation aOperation,
|
||||||
|
void* aPromise)
|
||||||
|
: ControlMessage(aStream)
|
||||||
|
, mAudioContextOperation(aOperation)
|
||||||
|
, mPromise(aPromise)
|
||||||
|
{
|
||||||
|
}
|
||||||
|
virtual void Run()
|
||||||
|
{
|
||||||
|
mStream->GraphImpl()->ApplyAudioContextOperationImpl(
|
||||||
|
mStream->AsAudioNodeStream(), mAudioContextOperation, mPromise);
|
||||||
|
}
|
||||||
|
virtual void RunDuringShutdown()
|
||||||
|
{
|
||||||
|
MOZ_ASSERT(false, "We should be reviving the graph?");
|
||||||
|
}
|
||||||
|
|
||||||
|
private:
|
||||||
|
AudioContextOperation mAudioContextOperation;
|
||||||
|
void* mPromise;
|
||||||
|
};
|
||||||
|
|
||||||
|
MediaStreamGraphImpl* graphImpl = static_cast<MediaStreamGraphImpl*>(this);
|
||||||
|
graphImpl->AppendMessage(
|
||||||
|
new AudioContextOperationControlMessage(aNodeStream, aOperation, aPromise));
|
||||||
|
}
|
||||||
|
|
||||||
bool
|
bool
|
||||||
MediaStreamGraph::IsNonRealtime() const
|
MediaStreamGraph::IsNonRealtime() const
|
||||||
{
|
{
|
||||||
|
|
|
@ -22,6 +22,7 @@
|
||||||
#include <speex/speex_resampler.h>
|
 #include <speex/speex_resampler.h>
 #include "mozilla/dom/AudioChannelBinding.h"
 #include "DOMMediaStream.h"
+#include "AudioContext.h"
 
 class nsIRunnable;
 
@@ -318,6 +319,7 @@ public:
   NS_INLINE_DECL_THREADSAFE_REFCOUNTING(MediaStream)
 
   explicit MediaStream(DOMMediaStream* aWrapper);
+  virtual dom::AudioContext::AudioContextId AudioContextId() const { return 0; }
 
 protected:
   // Protected destructor, to discourage deletion outside of Release():
@@ -364,6 +366,8 @@ public:
   // Explicitly block. Useful for example if a media element is pausing
   // and we need to stop its stream emitting its buffered data.
   virtual void ChangeExplicitBlockerCount(int32_t aDelta);
+  void BlockStreamIfNeeded();
+  void UnblockStreamIfNeeded();
   // Events will be dispatched by calling methods of aListener.
   virtual void AddListener(MediaStreamListener* aListener);
   virtual void RemoveListener(MediaStreamListener* aListener);
@@ -465,6 +469,22 @@ public:
   {
     mExplicitBlockerCount.SetAtAndAfter(aTime, mExplicitBlockerCount.GetAt(aTime) + aDelta);
   }
+  void BlockStreamIfNeededImpl(GraphTime aTime)
+  {
+    bool blocked = mExplicitBlockerCount.GetAt(aTime) > 0;
+    if (blocked) {
+      return;
+    }
+    ChangeExplicitBlockerCountImpl(aTime, 1);
+  }
+  void UnblockStreamIfNeededImpl(GraphTime aTime)
+  {
+    bool blocked = mExplicitBlockerCount.GetAt(aTime) > 0;
+    if (!blocked) {
+      return;
+    }
+    ChangeExplicitBlockerCountImpl(aTime, -1);
+  }
   void AddListenerImpl(already_AddRefed<MediaStreamListener> aListener);
   void RemoveListenerImpl(MediaStreamListener* aListener);
   void RemoveAllListenersImpl();
@@ -1227,6 +1247,21 @@ public:
   CreateAudioNodeExternalInputStream(AudioNodeEngine* aEngine,
                                      TrackRate aSampleRate = 0);
 
+  /* From the main thread, ask the MSG to send back an event when the graph
+   * thread is running, and audio is being processed. */
+  void NotifyWhenGraphStarted(AudioNodeStream* aNodeStream);
+  /* From the main thread, suspend, resume or close an AudioContext.
+   * aNodeStream is the stream of the DestinationNode of the AudioContext.
+   *
+   * This can possibly pause the graph thread, releasing system resources, if
+   * all streams have been suspended/closed.
+   *
+   * When the operation is complete, aPromise is resolved.
+   */
+  void ApplyAudioContextOperation(AudioNodeStream* aNodeStream,
+                                  dom::AudioContextOperation aState,
+                                  void* aPromise);
+
   bool IsNonRealtime() const;
   /**
    * Start processing non-realtime for a specific number of ticks.
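
For orientation, this entry point ends up being invoked from the main thread like this (the exact call site appears in AudioContext::Suspend() later in this patch); the promise is gripped first so the raw pointer stays valid while it crosses threads:

    mPromiseGripArray.AppendElement(promise);  // keep the Promise alive on the main thread
    Graph()->ApplyAudioContextOperation(DestinationStream()->AsAudioNodeStream(),
                                        AudioContextOperation::Suspend,
                                        promise);  // passed as a void*, treated purely as an ID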

@@ -248,6 +248,49 @@ public:
    * Mark aStream and all its inputs (recursively) as consumed.
    */
   static void MarkConsumed(MediaStream* aStream);
 
+  /**
+   * Given the Id of an AudioContext, return the set of all MediaStreams that
+   * are part of this context.
+   */
+  void StreamSetForAudioContext(dom::AudioContext::AudioContextId aAudioContextId,
+                                mozilla::LinkedList<MediaStream>& aStreamSet);
+
+  /**
+   * Called when a suspend/resume/close operation has been completed, on the
+   * graph thread.
+   */
+  void AudioContextOperationCompleted(MediaStream* aStream,
+                                      void* aPromise,
+                                      dom::AudioContextOperation aOperation);
+
+  /**
+   * Apply an AudioContext operation (suspend/resume/close), on the graph
+   * thread.
+   */
+  void ApplyAudioContextOperationImpl(AudioNodeStream* aStream,
+                                      dom::AudioContextOperation aOperation,
+                                      void* aPromise);
+
+  /*
+   * Move streams from mStreams to mSuspendedStreams when suspending/closing an
+   * AudioContext, or the inverse when resuming an AudioContext.
+   */
+  void MoveStreams(dom::AudioContextOperation aAudioContextOperation,
+                   mozilla::LinkedList<MediaStream>& aStreamSet);
+
+  /*
+   * Reset some state about the streams before suspending them, or resuming
+   * them.
+   */
+  void ResetVisitedStreamState();
+
+  /*
+   * True if a stream is suspended, that is, not in mStreams, but in
+   * mSuspendedStreams.
+   */
+  bool StreamSuspended(MediaStream* aStream);
+
   /**
    * Sort mStreams so that every stream not in a cycle is after any streams
    * it depends on, and every stream in a cycle is marked as being in a cycle.
@@ -368,7 +411,10 @@ public:
   /**
    * Returns true when there are no active streams.
    */
-  bool IsEmpty() { return mStreams.IsEmpty() && mPortCount == 0; }
+  bool IsEmpty()
+  {
+    return mStreams.IsEmpty() && mSuspendedStreams.IsEmpty() && mPortCount == 0;
+  }
 
   // For use by control messages, on graph thread only.
   /**
@@ -487,6 +533,13 @@ public:
    * unnecessary thread-safe refcount changes.
    */
   nsTArray<MediaStream*> mStreams;
+  /**
+   * This stores MediaStreams that are part of suspended AudioContexts.
+   * mStreams and mSuspendedStreams are disjoint sets: a stream is either
+   * suspended or not suspended. Suspended streams are not ordered in
+   * UpdateStreamOrder, and are therefore not doing any processing.
+   */
+  nsTArray<MediaStream*> mSuspendedStreams;
   /**
    * Streams from mFirstCycleBreaker to the end of mStreams produce output
    * before they receive input. They correspond to DelayNodes that are in
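
The bookkeeping behind these declarations is simple in spirit: suspend/close parks a context's streams, resume un-parks them. A minimal standalone sketch of what MoveStreams amounts to (plain C++ with ints standing in for MediaStream pointers; this is an illustration, not the Gecko implementation):

    #include <algorithm>
    #include <vector>

    enum class Op { Suspend, Resume, Close };

    void MoveStreams(Op aOp,
                     std::vector<int>& aStreams,          // processed every iteration
                     std::vector<int>& aSuspendedStreams, // parked: unordered, unprocessed
                     const std::vector<int>& aStreamSet)  // the streams of one AudioContext
    {
      auto& from = (aOp == Op::Resume) ? aSuspendedStreams : aStreams;
      auto& to   = (aOp == Op::Resume) ? aStreams : aSuspendedStreams;
      for (int stream : aStreamSet) {
        auto it = std::find(from.begin(), from.end(), stream);
        if (it != from.end()) {
          from.erase(it);   // a parked stream is skipped by UpdateStreamOrder
          to.push_back(stream);
        }
      }
    }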

@@ -24,10 +24,10 @@
 #include "AudioNodeEngine.h"
 #include "AudioNodeStream.h"
 #include "AudioNodeExternalInputStream.h"
+#include "webaudio/MediaStreamAudioDestinationNode.h"
 #include <algorithm>
 #include "DOMMediaStream.h"
 #include "GeckoProfiler.h"
-#include "mozilla/unused.h"
 #ifdef MOZ_WEBRTC
 #include "AudioOutputObserver.h"
 #endif
@@ -274,6 +274,9 @@ TrackUnionStream::TrackUnionStream(DOMMediaStream* aWrapper) :
                      this, (long long)ticks, outputTrack->GetID()));
       } else if (InMutedCycle()) {
         segment->AppendNullData(ticks);
+      } else {
+        if (GraphImpl()->StreamSuspended(source)) {
+          segment->AppendNullData(aTo - aFrom);
       } else {
         MOZ_ASSERT(outputTrack->GetEnd() == GraphTimeToStreamTime(interval.mStart),
                    "Samples missing");
@@ -282,6 +285,7 @@ TrackUnionStream::TrackUnionStream(DOMMediaStream* aWrapper) :
                           std::min(inputTrackEndPoint, inputStart),
                           std::min(inputTrackEndPoint, inputEnd));
       }
+      }
       ApplyTrackDisabling(outputTrack->GetID(), segment);
       for (uint32_t j = 0; j < mListeners.Length(); ++j) {
         MediaStreamListener* l = mListeners[j];
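
The TrackUnionStream change reduces to one rule: an input whose source belongs to a suspended context contributes null data for the whole interval rather than buffering at the graph input. A standalone sketch of that rule (not Gecko code):

    #include <cstdint>
    #include <vector>

    struct Segment {
      std::vector<float> samples;
      void AppendNullData(int64_t aTicks) { samples.insert(samples.end(), aTicks, 0.0f); }
    };

    void CopyFromSource(bool aSourceSuspended, int64_t aFrom, int64_t aTo,
                        const std::vector<float>& aInput, Segment& aOut)
    {
      if (aSourceSuspended) {
        aOut.AppendNullData(aTo - aFrom);  // suspended input -> silence, nothing queues up
      } else {
        aOut.samples.insert(aOut.samples.end(), aInput.begin(), aInput.end());
      }
    }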

@@ -9,8 +9,8 @@
 #include "nsPIDOMWindow.h"
 #include "mozilla/ErrorResult.h"
 #include "mozilla/dom/AnalyserNode.h"
-#include "mozilla/dom/AudioContextBinding.h"
 #include "mozilla/dom/HTMLMediaElement.h"
+#include "mozilla/dom/AudioContextBinding.h"
 #include "mozilla/dom/OfflineAudioContextBinding.h"
 #include "mozilla/dom/OwningNonNull.h"
 #include "MediaStreamGraph.h"
@@ -42,6 +42,10 @@
 namespace mozilla {
 namespace dom {
 
+// 0 is a special value that MediaStreams use to denote they are not part of an
+// AudioContext.
+static dom::AudioContext::AudioContextId gAudioContextId = 1;
+
 NS_IMPL_CYCLE_COLLECTION_CLASS(AudioContext)
 
 NS_IMPL_CYCLE_COLLECTION_UNLINK_BEGIN(AudioContext)
@@ -85,12 +89,15 @@ AudioContext::AudioContext(nsPIDOMWindow* aWindow,
                            uint32_t aLength,
                            float aSampleRate)
   : DOMEventTargetHelper(aWindow)
+  , mId(gAudioContextId++)
   , mSampleRate(GetSampleRateForAudioContext(aIsOffline, aSampleRate))
+  , mAudioContextState(AudioContextState::Suspended)
   , mNumberOfChannels(aNumberOfChannels)
   , mNodeCount(0)
   , mIsOffline(aIsOffline)
   , mIsStarted(!aIsOffline)
   , mIsShutDown(false)
+  , mCloseCalled(false)
 {
   aWindow->AddAudioContext(this);
 
@@ -197,9 +204,22 @@ AudioContext::Constructor(const GlobalObject& aGlobal,
   return object.forget();
 }
 
-already_AddRefed<AudioBufferSourceNode>
-AudioContext::CreateBufferSource()
+bool AudioContext::CheckClosed(ErrorResult& aRv)
 {
+  if (mAudioContextState == AudioContextState::Closed) {
+    aRv.Throw(NS_ERROR_DOM_INVALID_STATE_ERR);
+    return true;
+  }
+  return false;
+}
+
+already_AddRefed<AudioBufferSourceNode>
+AudioContext::CreateBufferSource(ErrorResult& aRv)
+{
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   nsRefPtr<AudioBufferSourceNode> bufferNode =
     new AudioBufferSourceNode(this);
   return bufferNode.forget();
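
Every node factory now opens with the same guard; CheckClosed() returns true after throwing so callers can bail with a one-liner. The shape every Create* method takes after this patch (CreateGain shown; the others are identical modulo the node type):

    already_AddRefed<GainNode>
    AudioContext::CreateGain(ErrorResult& aRv)
    {
      if (CheckClosed(aRv)) {  // throws NS_ERROR_DOM_INVALID_STATE_ERR on a closed context
        return nullptr;
      }
      nsRefPtr<GainNode> gainNode = new GainNode(this);
      return gainNode.forget();
    }

Note that createBuffer, createPeriodicWave and decodeAudioData deliberately do not get the guard: they stay legal on a closed context, as the new mochitest asserts.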

@@ -247,6 +267,10 @@ AudioContext::CreateMediaStreamDestination(ErrorResult& aRv)
     return nullptr;
   }
 
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   nsRefPtr<MediaStreamAudioDestinationNode> node =
     new MediaStreamAudioDestinationNode(this);
   return node.forget();
@@ -266,6 +290,10 @@ AudioContext::CreateScriptProcessor(uint32_t aBufferSize,
     return nullptr;
   }
 
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   nsRefPtr<ScriptProcessorNode> scriptProcessor =
     new ScriptProcessorNode(this, aBufferSize, aNumberOfInputChannels,
                             aNumberOfOutputChannels);
@@ -273,15 +301,23 @@ AudioContext::CreateScriptProcessor(uint32_t aBufferSize,
 }
 
 already_AddRefed<AnalyserNode>
-AudioContext::CreateAnalyser()
+AudioContext::CreateAnalyser(ErrorResult& aRv)
 {
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   nsRefPtr<AnalyserNode> analyserNode = new AnalyserNode(this);
   return analyserNode.forget();
 }
 
 already_AddRefed<StereoPannerNode>
-AudioContext::CreateStereoPanner()
+AudioContext::CreateStereoPanner(ErrorResult& aRv)
 {
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   nsRefPtr<StereoPannerNode> stereoPannerNode = new StereoPannerNode(this);
   return stereoPannerNode.forget();
 }
@@ -300,6 +336,11 @@ AudioContext::CreateMediaElementSource(HTMLMediaElement& aMediaElement,
     return nullptr;
   }
 #endif
 
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   nsRefPtr<DOMMediaStream> stream = aMediaElement.MozCaptureStream(aRv,
                                                                    mDestination->Stream()->Graph());
   if (aRv.Failed()) {
@@ -318,21 +359,34 @@ AudioContext::CreateMediaStreamSource(DOMMediaStream& aMediaStream,
     aRv.Throw(NS_ERROR_DOM_NOT_SUPPORTED_ERR);
     return nullptr;
   }
 
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   nsRefPtr<MediaStreamAudioSourceNode> mediaStreamAudioSourceNode =
     new MediaStreamAudioSourceNode(this, &aMediaStream);
   return mediaStreamAudioSourceNode.forget();
 }
 
 already_AddRefed<GainNode>
-AudioContext::CreateGain()
+AudioContext::CreateGain(ErrorResult& aRv)
 {
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   nsRefPtr<GainNode> gainNode = new GainNode(this);
   return gainNode.forget();
 }
 
 already_AddRefed<WaveShaperNode>
-AudioContext::CreateWaveShaper()
+AudioContext::CreateWaveShaper(ErrorResult& aRv)
 {
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   nsRefPtr<WaveShaperNode> waveShaperNode = new WaveShaperNode(this);
   return waveShaperNode.forget();
 }
@@ -340,25 +394,38 @@ AudioContext::CreateWaveShaper()
 already_AddRefed<DelayNode>
 AudioContext::CreateDelay(double aMaxDelayTime, ErrorResult& aRv)
 {
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   if (aMaxDelayTime > 0. && aMaxDelayTime < 180.) {
     nsRefPtr<DelayNode> delayNode = new DelayNode(this, aMaxDelayTime);
     return delayNode.forget();
   }
 
   aRv.Throw(NS_ERROR_DOM_NOT_SUPPORTED_ERR);
   return nullptr;
 }
 
 already_AddRefed<PannerNode>
-AudioContext::CreatePanner()
+AudioContext::CreatePanner(ErrorResult& aRv)
 {
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   nsRefPtr<PannerNode> pannerNode = new PannerNode(this);
   mPannerNodes.PutEntry(pannerNode);
   return pannerNode.forget();
 }
 
 already_AddRefed<ConvolverNode>
-AudioContext::CreateConvolver()
+AudioContext::CreateConvolver(ErrorResult& aRv)
 {
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   nsRefPtr<ConvolverNode> convolverNode = new ConvolverNode(this);
   return convolverNode.forget();
 }
@@ -372,6 +439,10 @@ AudioContext::CreateChannelSplitter(uint32_t aNumberOfOutputs, ErrorResult& aRv)
     return nullptr;
   }
 
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   nsRefPtr<ChannelSplitterNode> splitterNode =
     new ChannelSplitterNode(this, aNumberOfOutputs);
   return splitterNode.forget();
@@ -386,30 +457,46 @@ AudioContext::CreateChannelMerger(uint32_t aNumberOfInputs, ErrorResult& aRv)
     return nullptr;
   }
 
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   nsRefPtr<ChannelMergerNode> mergerNode =
     new ChannelMergerNode(this, aNumberOfInputs);
   return mergerNode.forget();
 }
 
 already_AddRefed<DynamicsCompressorNode>
-AudioContext::CreateDynamicsCompressor()
+AudioContext::CreateDynamicsCompressor(ErrorResult& aRv)
 {
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   nsRefPtr<DynamicsCompressorNode> compressorNode =
     new DynamicsCompressorNode(this);
   return compressorNode.forget();
 }
 
 already_AddRefed<BiquadFilterNode>
-AudioContext::CreateBiquadFilter()
+AudioContext::CreateBiquadFilter(ErrorResult& aRv)
 {
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   nsRefPtr<BiquadFilterNode> filterNode =
     new BiquadFilterNode(this);
   return filterNode.forget();
 }
 
 already_AddRefed<OscillatorNode>
-AudioContext::CreateOscillator()
+AudioContext::CreateOscillator(ErrorResult& aRv)
 {
+  if (CheckClosed(aRv)) {
+    return nullptr;
+  }
+
   nsRefPtr<OscillatorNode> oscillatorNode =
     new OscillatorNode(this);
   return oscillatorNode.forget();
@@ -597,22 +684,239 @@ AudioContext::Shutdown()
   }
 }
 
-void
-AudioContext::Suspend()
+AudioContextState AudioContext::State() const
 {
-  MediaStream* ds = DestinationStream();
-  if (ds) {
-    ds->ChangeExplicitBlockerCount(1);
-  }
+  return mAudioContextState;
 }
 
-void
-AudioContext::Resume()
+StateChangeTask::StateChangeTask(AudioContext* aAudioContext,
+                                 void* aPromise,
+                                 AudioContextState aNewState)
+  : mAudioContext(aAudioContext)
+  , mPromise(aPromise)
+  , mAudioNodeStream(nullptr)
+  , mNewState(aNewState)
 {
+  MOZ_ASSERT(NS_IsMainThread(),
+             "This constructor should be used from the main thread.");
+}
+
+StateChangeTask::StateChangeTask(AudioNodeStream* aStream,
+                                 void* aPromise,
+                                 AudioContextState aNewState)
+  : mAudioContext(nullptr)
+  , mPromise(aPromise)
+  , mAudioNodeStream(aStream)
+  , mNewState(aNewState)
+{
+  MOZ_ASSERT(!NS_IsMainThread(),
+             "This constructor should be used from the graph thread.");
+}
+
+NS_IMETHODIMP
+StateChangeTask::Run()
+{
+  MOZ_ASSERT(NS_IsMainThread());
+
+  if (!mAudioContext && !mAudioNodeStream) {
+    return NS_OK;
+  }
+  if (mAudioNodeStream) {
+    AudioNode* node = mAudioNodeStream->Engine()->NodeMainThread();
+    if (!node) {
+      return NS_OK;
+    }
+    mAudioContext = node->Context();
+    if (!mAudioContext) {
+      return NS_OK;
+    }
+  }
+
+  mAudioContext->OnStateChanged(mPromise, mNewState);
+  // We can't call Release() on the AudioContext on the MSG thread, so we
+  // unref it here, on the main thread.
+  mAudioContext = nullptr;
+
+  return NS_OK;
+}
+
+/* This runnable allows us to fire the "statechange" event */
+class OnStateChangeTask final : public nsRunnable
+{
+public:
+  explicit OnStateChangeTask(AudioContext* aAudioContext)
+    : mAudioContext(aAudioContext)
+  {}
+
+  NS_IMETHODIMP
+  Run() override
+  {
+    nsCOMPtr<nsPIDOMWindow> parent = do_QueryInterface(mAudioContext->GetParentObject());
+    if (!parent) {
+      return NS_ERROR_FAILURE;
+    }
+
+    nsIDocument* doc = parent->GetExtantDoc();
+    if (!doc) {
+      return NS_ERROR_FAILURE;
+    }
+
+    return nsContentUtils::DispatchTrustedEvent(doc,
+                                                static_cast<DOMEventTargetHelper*>(mAudioContext),
+                                                NS_LITERAL_STRING("statechange"),
+                                                false, false);
+  }
+
+private:
+  nsRefPtr<AudioContext> mAudioContext;
+};
+
+void
+AudioContext::OnStateChanged(void* aPromise, AudioContextState aNewState)
+{
+  MOZ_ASSERT(NS_IsMainThread());
+
+  MOZ_ASSERT((mAudioContextState == AudioContextState::Suspended &&
+              aNewState == AudioContextState::Running)   ||
+             (mAudioContextState == AudioContextState::Running &&
+              aNewState == AudioContextState::Suspended) ||
+             (mAudioContextState == AudioContextState::Running &&
+              aNewState == AudioContextState::Closed)    ||
+             (mAudioContextState == AudioContextState::Suspended &&
+              aNewState == AudioContextState::Closed)    ||
+             (mAudioContextState == aNewState),
+             "Invalid AudioContextState transition");
+
+  MOZ_ASSERT(
+    mIsOffline || aPromise || aNewState == AudioContextState::Running,
+    "We should have a promise here if this is a real-time AudioContext."
+    " Or this is the first time we switch to \"running\".");
+
+  if (aPromise) {
+    Promise* promise = reinterpret_cast<Promise*>(aPromise);
+    promise->MaybeResolve(JS::UndefinedHandleValue);
+    DebugOnly<bool> rv = mPromiseGripArray.RemoveElement(promise);
+    MOZ_ASSERT(rv, "Promise wasn't in the grip array?");
+  }
+
+  if (mAudioContextState != aNewState) {
+    nsRefPtr<OnStateChangeTask> onStateChangeTask =
+      new OnStateChangeTask(this);
+    NS_DispatchToMainThread(onStateChangeTask);
+  }
+
+  mAudioContextState = aNewState;
+}
+
+already_AddRefed<Promise>
+AudioContext::Suspend(ErrorResult& aRv)
+{
+  nsCOMPtr<nsIGlobalObject> parentObject = do_QueryInterface(GetParentObject());
+  nsRefPtr<Promise> promise;
+  promise = Promise::Create(parentObject, aRv);
+  if (aRv.Failed()) {
+    return nullptr;
+  }
+  if (mIsOffline) {
+    promise->MaybeReject(NS_ERROR_DOM_NOT_SUPPORTED_ERR);
+    return promise.forget();
+  }
+
+  if (mAudioContextState == AudioContextState::Closed ||
+      mCloseCalled) {
+    promise->MaybeReject(NS_ERROR_DOM_INVALID_STATE_ERR);
+    return promise.forget();
+  }
+
+  if (mAudioContextState == AudioContextState::Suspended) {
+    promise->MaybeResolve(JS::UndefinedHandleValue);
+    return promise.forget();
+  }
+
   MediaStream* ds = DestinationStream();
   if (ds) {
-    ds->ChangeExplicitBlockerCount(-1);
+    ds->BlockStreamIfNeeded();
   }
+
+  mPromiseGripArray.AppendElement(promise);
+  Graph()->ApplyAudioContextOperation(DestinationStream()->AsAudioNodeStream(),
+                                      AudioContextOperation::Suspend, promise);
+
+  return promise.forget();
+}
+
+already_AddRefed<Promise>
+AudioContext::Resume(ErrorResult& aRv)
+{
+  nsCOMPtr<nsIGlobalObject> parentObject = do_QueryInterface(GetParentObject());
+  nsRefPtr<Promise> promise;
+  promise = Promise::Create(parentObject, aRv);
+  if (aRv.Failed()) {
+    return nullptr;
+  }
+
+  if (mIsOffline) {
+    promise->MaybeReject(NS_ERROR_DOM_NOT_SUPPORTED_ERR);
+    return promise.forget();
+  }
+
+  if (mAudioContextState == AudioContextState::Closed ||
+      mCloseCalled) {
+    promise->MaybeReject(NS_ERROR_DOM_INVALID_STATE_ERR);
+    return promise.forget();
+  }
+
+  if (mAudioContextState == AudioContextState::Running) {
+    promise->MaybeResolve(JS::UndefinedHandleValue);
+    return promise.forget();
+  }
+
+  MediaStream* ds = DestinationStream();
+  if (ds) {
+    ds->UnblockStreamIfNeeded();
+  }
+
+  mPromiseGripArray.AppendElement(promise);
+  Graph()->ApplyAudioContextOperation(DestinationStream()->AsAudioNodeStream(),
+                                      AudioContextOperation::Resume, promise);
+
+  return promise.forget();
+}
+
+already_AddRefed<Promise>
+AudioContext::Close(ErrorResult& aRv)
+{
+  nsCOMPtr<nsIGlobalObject> parentObject = do_QueryInterface(GetParentObject());
+  nsRefPtr<Promise> promise;
+  promise = Promise::Create(parentObject, aRv);
+  if (aRv.Failed()) {
+    return nullptr;
+  }
+
+  if (mIsOffline) {
+    promise->MaybeReject(NS_ERROR_DOM_NOT_SUPPORTED_ERR);
+    return promise.forget();
+  }
+
+  if (mAudioContextState == AudioContextState::Closed) {
+    promise->MaybeReject(NS_ERROR_DOM_INVALID_STATE_ERR);
+    return promise.forget();
+  }
+
+  mCloseCalled = true;
+
+  mPromiseGripArray.AppendElement(promise);
+  Graph()->ApplyAudioContextOperation(DestinationStream()->AsAudioNodeStream(),
+                                      AudioContextOperation::Close, promise);
+
+  MediaStream* ds = DestinationStream();
+  if (ds) {
+    ds->BlockStreamIfNeeded();
+  }
+
+  return promise.forget();
 }
 
 void
@@ -653,6 +957,9 @@ AudioContext::StartRendering(ErrorResult& aRv)
   mIsStarted = true;
   nsRefPtr<Promise> promise = Promise::Create(parentObject, aRv);
   mDestination->StartRendering(promise);
+
+  OnStateChanged(nullptr, AudioContextState::Running);
+
   return promise.forget();
 }
 
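
The promise round-trip is the subtle part of this file. Below is a minimal, runnable C++ model of the hand-off -- plain queues stand in for the graph thread and for NS_DispatchToMainThread, and a bare struct stands in for dom::Promise; this illustrates the scheme, it is not Gecko code:

    #include <algorithm>
    #include <cstdio>
    #include <functional>
    #include <queue>
    #include <vector>

    struct Promise {
      void MaybeResolve() { std::printf("resolved on the main thread\n"); }
    };

    static std::vector<Promise*> gPromiseGripArray;         // keeps promises alive
    static std::queue<std::function<void()>> gGraphThread;  // stand-in for the MSG thread
    static std::queue<std::function<void()>> gMainThread;   // stand-in for NS_DispatchToMainThread

    void ApplyAudioContextOperation(Promise* aPromise)
    {
      gPromiseGripArray.push_back(aPromise);  // 1. grip on the main thread
      void* promiseId = aPromise;             // 2. demote to an opaque id: Promise is not thread-safe
      gGraphThread.push([promiseId] {
        // 3. ...suspend/resume/close the streams, start/stop audio streams...
        gMainThread.push([promiseId] {
          // 4. back on the main thread: safe to cast back and resolve
          Promise* promise = static_cast<Promise*>(promiseId);
          promise->MaybeResolve();
          gPromiseGripArray.erase(std::find(gPromiseGripArray.begin(),
                                            gPromiseGripArray.end(), promise));
        });
      });
    }

    int main()
    {
      Promise p;
      ApplyAudioContextOperation(&p);
      while (!gGraphThread.empty()) { gGraphThread.front()(); gGraphThread.pop(); }
      while (!gMainThread.empty())  { gMainThread.front()();  gMainThread.pop(); }
    }

Because the promise only ever travels as a void*, no bookkeeping is needed to match an operation to its result: the pointer itself is the result channel.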

@@ -35,9 +35,12 @@ class DOMMediaStream;
 class ErrorResult;
 class MediaStream;
 class MediaStreamGraph;
+class AudioNodeEngine;
+class AudioNodeStream;
 
 namespace dom {
 
+enum class AudioContextState : uint32_t;
 class AnalyserNode;
 class AudioBuffer;
 class AudioBufferSourceNode;
@@ -64,6 +67,30 @@ class WaveShaperNode;
 class PeriodicWave;
 class Promise;
 
+/* This runnable allows the MSG to notify the main thread when audio is actually
+ * flowing */
+class StateChangeTask final : public nsRunnable
+{
+public:
+  /* This constructor should be used when this event is sent from the main
+   * thread. */
+  StateChangeTask(AudioContext* aAudioContext, void* aPromise, AudioContextState aNewState);
+
+  /* This constructor should be used when this event is sent from the audio
+   * thread. */
+  StateChangeTask(AudioNodeStream* aStream, void* aPromise, AudioContextState aNewState);
+
+  NS_IMETHOD Run() override;
+
+private:
+  nsRefPtr<AudioContext> mAudioContext;
+  void* mPromise;
+  nsRefPtr<AudioNodeStream> mAudioNodeStream;
+  AudioContextState mNewState;
+};
+
+enum AudioContextOperation { Suspend, Resume, Close };
+
 class AudioContext final : public DOMEventTargetHelper,
                            public nsIMemoryReporter
 {
@@ -76,6 +103,8 @@ class AudioContext final : public DOMEventTargetHelper,
   ~AudioContext();
 
 public:
+  typedef uint64_t AudioContextId;
+
   NS_DECL_ISUPPORTS_INHERITED
   NS_DECL_CYCLE_COLLECTION_CLASS_INHERITED(AudioContext,
                                            DOMEventTargetHelper)
@@ -87,8 +116,6 @@ public:
   }
 
   void Shutdown(); // idempotent
-  void Suspend();
-  void Resume();
 
   virtual JSObject* WrapObject(JSContext* aCx, JS::Handle<JSObject*> aGivenProto) override;
 
@@ -124,11 +151,31 @@ public:
     return mSampleRate;
   }
 
+  AudioContextId Id() const
+  {
+    return mId;
+  }
+
   double CurrentTime() const;
 
   AudioListener* Listener();
 
-  already_AddRefed<AudioBufferSourceNode> CreateBufferSource();
+  AudioContextState State() const;
+  // Those three methods return a promise to content, that is resolved when an
+  // (possibly long) operation is completed on the MSG (and possibly other)
+  // thread(s). To avoid having to match calls to their asynchronous results
+  // when the operation is completed, we keep a reference to the promises on
+  // the main thread, and then send the promise pointers down to the MSG
+  // thread, as a void* (to make it very clear that the pointer is merely to
+  // be treated as an ID). When back on the main thread, we can resolve or
+  // reject the promise, by casting it back to a Promise* while asserting
+  // we're back on the main thread and removing the reference we added.
+  already_AddRefed<Promise> Suspend(ErrorResult& aRv);
+  already_AddRefed<Promise> Resume(ErrorResult& aRv);
+  already_AddRefed<Promise> Close(ErrorResult& aRv);
+  IMPL_EVENT_HANDLER(statechange)
+
+  already_AddRefed<AudioBufferSourceNode> CreateBufferSource(ErrorResult& aRv);
+
   already_AddRefed<AudioBuffer>
   CreateBuffer(JSContext* aJSContext, uint32_t aNumberOfChannels,
@@ -145,16 +192,16 @@ public:
                ErrorResult& aRv);
 
   already_AddRefed<StereoPannerNode>
-  CreateStereoPanner();
+  CreateStereoPanner(ErrorResult& aRv);
 
   already_AddRefed<AnalyserNode>
-  CreateAnalyser();
+  CreateAnalyser(ErrorResult& aRv);
 
   already_AddRefed<GainNode>
-  CreateGain();
+  CreateGain(ErrorResult& aRv);
 
   already_AddRefed<WaveShaperNode>
-  CreateWaveShaper();
+  CreateWaveShaper(ErrorResult& aRv);
 
   already_AddRefed<MediaElementAudioSourceNode>
   CreateMediaElementSource(HTMLMediaElement& aMediaElement, ErrorResult& aRv);
@@ -165,10 +212,10 @@ public:
   CreateDelay(double aMaxDelayTime, ErrorResult& aRv);
 
   already_AddRefed<PannerNode>
-  CreatePanner();
+  CreatePanner(ErrorResult& aRv);
 
   already_AddRefed<ConvolverNode>
-  CreateConvolver();
+  CreateConvolver(ErrorResult& aRv);
 
   already_AddRefed<ChannelSplitterNode>
   CreateChannelSplitter(uint32_t aNumberOfOutputs, ErrorResult& aRv);
@@ -177,13 +224,13 @@ public:
   CreateChannelMerger(uint32_t aNumberOfInputs, ErrorResult& aRv);
 
   already_AddRefed<DynamicsCompressorNode>
-  CreateDynamicsCompressor();
+  CreateDynamicsCompressor(ErrorResult& aRv);
 
   already_AddRefed<BiquadFilterNode>
-  CreateBiquadFilter();
+  CreateBiquadFilter(ErrorResult& aRv);
 
   already_AddRefed<OscillatorNode>
-  CreateOscillator();
+  CreateOscillator(ErrorResult& aRv);
 
   already_AddRefed<PeriodicWave>
   CreatePeriodicWave(const Float32Array& aRealData, const Float32Array& aImagData,
@@ -244,6 +291,8 @@ public:
     return aTime + ExtraCurrentTime();
   }
 
+  void OnStateChanged(void* aPromise, AudioContextState aNewState);
+
   IMPL_EVENT_HANDLER(mozinterruptbegin)
   IMPL_EVENT_HANDLER(mozinterruptend)
 
@@ -266,13 +315,23 @@ private:
 
   friend struct ::mozilla::WebAudioDecodeJob;
 
+  bool CheckClosed(ErrorResult& aRv);
+
 private:
+  // Each AudioContext has an id, that is passed down to the MediaStreams that
+  // back the AudioNodes, so we can easily compute the set of all the
+  // MediaStreams for a given context, on the MediaStreamGraph side.
+  const AudioContextId mId;
   // Note that it's important for mSampleRate to be initialized before
   // mDestination, as mDestination's constructor needs to access it!
   const float mSampleRate;
+  AudioContextState mAudioContextState;
   nsRefPtr<AudioDestinationNode> mDestination;
   nsRefPtr<AudioListener> mListener;
   nsTArray<nsRefPtr<WebAudioDecodeJob> > mDecodeJobs;
+  // This array is used to keep the suspend/resume/close promises alive until
+  // they are resolved, so we can safely pass them across threads.
+  nsTArray<nsRefPtr<Promise>> mPromiseGripArray;
   // See RegisterActiveNode. These will keep the AudioContext alive while it
   // is rendering and the window remains alive.
   nsTHashtable<nsRefPtrHashKey<AudioNode> > mActiveNodes;
@@ -286,8 +345,12 @@ private:
   bool mIsOffline;
   bool mIsStarted;
   bool mIsShutDown;
+  // Close has been called; reject subsequent suspend and resume calls.
+  bool mCloseCalled;
 };
 
+static const dom::AudioContext::AudioContextId NO_AUDIO_CONTEXT = 0;
+
 }
 }
 
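
The MOZ_ASSERT in OnStateChanged encodes the whole state machine. Restated as a standalone predicate (not Gecko code): suspended and running can flip back and forth, either can reach closed, and closed is terminal:

    enum class State { Suspended, Running, Closed };

    bool IsValidTransition(State aFrom, State aTo)
    {
      if (aFrom == aTo) {
        return true;  // no-op transitions are tolerated
      }
      switch (aFrom) {
        case State::Suspended: return aTo == State::Running || aTo == State::Closed;
        case State::Running:   return aTo == State::Suspended || aTo == State::Closed;
        case State::Closed:    return false;  // closed is terminal
      }
      return false;
    }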

@@ -5,6 +5,7 @@
  * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
 
 #include "AudioDestinationNode.h"
+#include "AudioContext.h"
 #include "mozilla/dom/AudioDestinationNodeBinding.h"
 #include "mozilla/dom/ScriptSettings.h"
 #include "mozilla/Preferences.h"
@@ -176,9 +177,11 @@ public:
 
     aNode->ResolvePromise(renderedBuffer);
 
-    nsRefPtr<OnCompleteTask> task =
+    nsRefPtr<OnCompleteTask> onCompleteTask =
       new OnCompleteTask(context, renderedBuffer);
-    NS_DispatchToMainThread(task);
+    NS_DispatchToMainThread(onCompleteTask);
+
+    context->OnStateChanged(nullptr, AudioContextState::Closed);
   }
 
   virtual size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const override
@@ -367,6 +370,10 @@ AudioDestinationNode::AudioDestinationNode(AudioContext* aContext,
   mStream->AddMainThreadListener(this);
   mStream->AddAudioOutput(&gWebAudioOutputKey);
 
+  if (!aIsOffline) {
+    graph->NotifyWhenGraphStarted(mStream->AsAudioNodeStream());
+  }
+
   if (aChannel != AudioChannel::Normal) {
     ErrorResult rv;
     SetMozAudioChannelType(aChannel, rv);
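
NotifyWhenGraphStarted is what makes the initial suspended -> running transition observable for real-time contexts: the notification fires exactly once, from the first audio callback, so the elapsed time measures how long the audio stack takes to start. A standalone model of the one-shot (not Gecko code):

    #include <atomic>
    #include <functional>

    struct GraphStartNotifier {
      std::atomic<bool> mNotified{false};
      std::function<void()> mOnRunning;  // dispatched to the main thread in the real code

      void OnAudioCallback()  // runs on the audio/graph thread
      {
        if (!mNotified.exchange(true)) {
          mOnRunning();  // fires once, when audio is actually flowing
        }
      }
    };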

@@ -12,8 +12,8 @@ using namespace mozilla::dom;
 
 namespace mozilla {
 
-AudioNodeExternalInputStream::AudioNodeExternalInputStream(AudioNodeEngine* aEngine, TrackRate aSampleRate)
-  : AudioNodeStream(aEngine, MediaStreamGraph::INTERNAL_STREAM, aSampleRate)
+AudioNodeExternalInputStream::AudioNodeExternalInputStream(AudioNodeEngine* aEngine, TrackRate aSampleRate, uint32_t aContextId)
+  : AudioNodeStream(aEngine, MediaStreamGraph::INTERNAL_STREAM, aSampleRate, aContextId)
 {
   MOZ_COUNT_CTOR(AudioNodeExternalInputStream);
 }

@@ -20,7 +20,7 @@ namespace mozilla {
 */
 class AudioNodeExternalInputStream : public AudioNodeStream {
 public:
-  AudioNodeExternalInputStream(AudioNodeEngine* aEngine, TrackRate aSampleRate);
+  AudioNodeExternalInputStream(AudioNodeEngine* aEngine, TrackRate aSampleRate, uint32_t aContextId);
 protected:
   ~AudioNodeExternalInputStream();
 

@@ -27,10 +27,12 @@ namespace mozilla {
 
 AudioNodeStream::AudioNodeStream(AudioNodeEngine* aEngine,
                                  MediaStreamGraph::AudioNodeStreamKind aKind,
-                                 TrackRate aSampleRate)
+                                 TrackRate aSampleRate,
+                                 AudioContext::AudioContextId aContextId)
   : ProcessedMediaStream(nullptr),
     mEngine(aEngine),
     mSampleRate(aSampleRate),
+    mAudioContextId(aContextId),
    mKind(aKind),
    mNumberOfInputChannels(2),
    mMarkAsFinishedAfterThisBlock(false),

@@ -47,7 +47,8 @@ public:
   */
  AudioNodeStream(AudioNodeEngine* aEngine,
                  MediaStreamGraph::AudioNodeStreamKind aKind,
-                 TrackRate aSampleRate);
+                 TrackRate aSampleRate,
+                 AudioContext::AudioContextId aContextId);
 
 protected:
  ~AudioNodeStream();
@@ -121,6 +122,7 @@ public:
  // Any thread
  AudioNodeEngine* Engine() { return mEngine; }
  TrackRate SampleRate() const { return mSampleRate; }
+  AudioContext::AudioContextId AudioContextId() const override { return mAudioContextId; }
 
  /**
   * Convert a time in seconds on the destination stream to ticks
@@ -147,6 +149,7 @@ public:
  void SizeOfAudioNodesIncludingThis(MallocSizeOf aMallocSizeOf,
                                     AudioNodeSizes& aUsage) const;
 
+
 protected:
  void AdvanceOutputSegment();
  void FinishOutput();
@@ -166,8 +169,11 @@ protected:
  OutputChunks mLastChunks;
  // The stream's sampling rate
  const TrackRate mSampleRate;
+  // This is necessary to be able to find all the nodes for a given
+  // AudioContext. It is set on the main thread, in the constructor.
+  const AudioContext::AudioContextId mAudioContextId;
  // Whether this is an internal or external stream
-  MediaStreamGraph::AudioNodeStreamKind mKind;
+  const MediaStreamGraph::AudioNodeStreamKind mKind;
  // The number of input channels that this stream requires. 0 means don't care.
  uint32_t mNumberOfInputChannels;
  // The mixing modes
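
With every AudioNodeStream tagged at construction, collecting a context's streams on the graph thread is a lock-free linear scan; the virtual AudioContextId() on MediaStream returns 0 (NO_AUDIO_CONTEXT) for streams that belong to no context. A standalone sketch of the lookup StreamSetForAudioContext performs (not the Gecko implementation):

    #include <cstdint>
    #include <vector>

    struct Stream { virtual uint64_t AudioContextId() const { return 0; } };

    std::vector<Stream*> StreamSetFor(uint64_t aContextId,
                                      const std::vector<Stream*>& aStreams,
                                      const std::vector<Stream*>& aSuspendedStreams)
    {
      std::vector<Stream*> set;
      for (auto* streams : { &aStreams, &aSuspendedStreams }) {
        for (Stream* s : *streams) {
          if (s->AudioContextId() == aContextId) {
            set.push_back(s);  // every stream backing a node of this context
          }
        }
      }
      return set;
    }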

@@ -35,6 +35,7 @@ public:
      NS_ERROR("MediaStreamAudioSourceNodeEngine bad parameter index");
    }
  }
+
 private:
  bool mEnabled;
 };

@@ -31,6 +31,7 @@ EXPORTS += [
 
 EXPORTS.mozilla += [
     'FFTBlock.h',
+    'MediaStreamAudioDestinationNode.h',
 ]
 
 EXPORTS.mozilla.dom += [

@@ -44,6 +44,8 @@ skip-if = (toolkit == 'android' && (processor == 'x86' || debug)) || os == 'win'
 skip-if = (toolkit == 'gonk') || (toolkit == 'android') || debug #bug 906752
 [test_audioBufferSourceNodePassThrough.html]
 [test_AudioContext.html]
+skip-if = android_version == '10' # bug 1138462
+[test_audioContextSuspendResumeClose.html]
 [test_audioDestinationNode.html]
 [test_AudioListener.html]
 [test_audioParamExponentialRamp.html]
|
@ -0,0 +1,400 @@
|
||||||
|
<!DOCTYPE HTML>
|
||||||
|
<html>
|
||||||
|
<head>
|
||||||
|
<title>Test suspend, resume and close method of the AudioContext</title>
|
||||||
|
<script type="text/javascript" src="/tests/SimpleTest/SimpleTest.js"></script>
|
||||||
|
<script type="text/javascript" src="webaudio.js"></script>
|
||||||
|
<link rel="stylesheet" type="text/css" href="/tests/SimpleTest/test.css" />
|
||||||
|
</head>
|
||||||
|
<body>
|
||||||
|
<pre id="test">
|
||||||
|
<script class="testbody" type="text/javascript">
|
||||||
|
|
||||||
|
SimpleTest.requestCompleteLog();
|
||||||
|
|
||||||
|
function tryToToCreateNodeOnClosedContext(ctx) {
|
||||||
|
ok(ctx.state, "closed", "The context is in closed state");
|
||||||
|
|
||||||
|
[ { name: "createBufferSource" },
|
||||||
|
{ name: "createMediaStreamDestination",
|
||||||
|
onOfflineAudioContext: false},
|
||||||
|
{ name: "createScriptProcessor" },
|
||||||
|
{ name: "createStereoPanner" },
|
||||||
|
{ name: "createAnalyser" },
|
||||||
|
{ name: "createGain" },
|
||||||
|
{ name: "createDelay" },
|
||||||
|
{ name: "createBiquadFilter" },
|
||||||
|
{ name: "createWaveShaper" },
|
||||||
|
{ name: "createPanner" },
|
||||||
|
{ name: "createConvolver" },
|
||||||
|
{ name: "createChannelSplitter" },
|
||||||
|
{ name: "createChannelMerger" },
|
||||||
|
{ name: "createDynamicsCompressor" },
|
||||||
|
{ name: "createOscillator" },
|
||||||
|
{ name: "createMediaElementSource",
|
||||||
|
args: [new Audio()],
|
||||||
|
onOfflineAudioContext: false },
|
||||||
|
{ name: "createMediaStreamSource",
|
||||||
|
args: [new Audio().mozCaptureStream()],
|
||||||
|
onOfflineAudioContext: false } ].forEach(function(e) {
|
||||||
|
|
||||||
|
if (e.onOfflineAudioContext == false &&
|
||||||
|
ctx instanceof OfflineAudioContext) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
expectException(function() {
|
||||||
|
ctx[e.name].apply(ctx, e.args);
|
||||||
|
}, DOMException.INVALID_STATE_ERR);
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
function loadFile(url, callback) {
|
||||||
|
var xhr = new XMLHttpRequest();
|
||||||
|
xhr.open("GET", url, true);
|
||||||
|
xhr.responseType = "arraybuffer";
|
||||||
|
xhr.onload = function() {
|
||||||
|
callback(xhr.response);
|
||||||
|
};
|
||||||
|
xhr.send();
|
||||||
|
}
|
||||||
|
|
||||||
|
// createBuffer, createPeriodicWave and decodeAudioData should work on a context
|
||||||
|
// that has `state` == "closed"
|
||||||
|
function tryLegalOpeerationsOnClosedContext(ctx) {
|
||||||
|
ok(ctx.state, "closed", "The context is in closed state");
|
||||||
|
|
||||||
|
[ { name: "createBuffer",
|
||||||
|
args: [1, 44100, 44100] },
|
||||||
|
{ name: "createPeriodicWave",
|
||||||
|
args: [new Float32Array(10), new Float32Array(10)] }
|
||||||
|
].forEach(function(e) {
|
||||||
|
expectNoException(function() {
|
||||||
|
ctx[e.name].apply(ctx, e.args);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
loadFile("ting-44.1k-1ch.ogg", function(buf) {
|
||||||
|
ctx.decodeAudioData(buf).then(function(decodedBuf) {
|
||||||
|
ok(true, "decodeAudioData on a closed context should work, it did.")
|
||||||
|
todo(false, "0 " + (ctx instanceof OfflineAudioContext ? "Offline" : "Realtime"));
|
||||||
|
finish();
|
||||||
|
}).catch(function(e){
|
||||||
|
ok(false, "decodeAudioData on a closed context should work, it did not");
|
||||||
|
finish();
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
// Test that MediaStreams that are the output of a suspended AudioContext are
|
||||||
|
// producing silence
|
||||||
|
// ac1 produce a sine fed to a MediaStreamAudioDestinationNode
|
||||||
|
// ac2 is connected to ac1 with a MediaStreamAudioSourceNode, and check that
|
||||||
|
// there is silence when ac1 is suspended
|
||||||
|
function testMultiContextOutput() {
|
||||||
|
var ac1 = new AudioContext(),
|
||||||
|
ac2 = new AudioContext();
|
||||||
|
|
||||||
|
var osc1 = ac1.createOscillator(),
|
||||||
|
mediaStreamDestination1 = ac1.createMediaStreamDestination();
|
||||||
|
|
||||||
|
var mediaStreamAudioSourceNode2 =
|
||||||
|
ac2.createMediaStreamSource(mediaStreamDestination1.stream),
|
||||||
|
sp2 = ac2.createScriptProcessor(),
|
||||||
|
suspendCalled = false,
|
||||||
|
silentBuffersInARow = 0;
|
||||||
|
|
||||||
|
|
||||||
|
sp2.onaudioprocess = function(e) {
|
||||||
|
if (!suspendCalled) {
|
||||||
|
ac1.suspend();
|
||||||
|
suspendCalled = true;
|
||||||
|
} else {
|
||||||
|
// Wait until the context that produce the tone is actually suspended. It
|
||||||
|
// can be that the second context receives a little amount of data because
|
||||||
|
// of the buffering between the two contexts.
|
||||||
|
if (ac1.state == "suspended") {
|
||||||
|
var input = e.inputBuffer.getChannelData(0);
|
||||||
|
var silent = true;
|
||||||
|
for (var i = 0; i < input.length; i++) {
|
||||||
|
if (input[i] != 0.0) {
|
||||||
|
silent = false;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (silent) {
|
||||||
|
silentBuffersInARow++;
|
||||||
|
if (silentBuffersInARow == 10) {
|
||||||
|
ok(true,
|
||||||
|
"MediaStreams produce silence when their input is blocked.");
|
||||||
|
sp2.onaudioprocess = null;
|
||||||
|
ac1.close();
|
||||||
|
ac2.close();
|
||||||
|
todo(false,"1");
|
||||||
|
finish();
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
is(silentBuffersInARow, 0,
|
||||||
|
"No non silent buffer inbetween silent buffers.");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
osc1.connect(mediaStreamDestination1);
|
||||||
|
|
||||||
|
mediaStreamAudioSourceNode2.connect(sp2);
|
||||||
|
osc1.start();
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
// Test that there is no buffering between contexts when connecting a running
|
||||||
|
// AudioContext to a suspended AudioContext. Our ScriptProcessorNode does some
|
||||||
|
// buffering internally, so we ensure this by using a very very low frequency
|
||||||
|
// on a sine, and oberve that the phase has changed by a big enough margin.
|
||||||
|
function testMultiContextInput() {
|
||||||
|
var ac1 = new AudioContext(),
|
||||||
|
ac2 = new AudioContext();
|
||||||
|
|
||||||
|
var osc1 = ac1.createOscillator(),
|
||||||
|
mediaStreamDestination1 = ac1.createMediaStreamDestination(),
|
||||||
|
sp1 = ac1.createScriptProcessor();
|
||||||
|
|
||||||
|
var mediaStreamAudioSourceNode2 =
|
||||||
|
ac2.createMediaStreamSource(mediaStreamDestination1.stream),
|
||||||
|
sp2 = ac2.createScriptProcessor(),
|
||||||
|
resumed = false,
|
||||||
|
suspended = false,
|
||||||
|
countEventOnFirstSP = true,
|
||||||
|
eventReceived = 0;
|
||||||
|
|
||||||
|
|
||||||
|
osc1.frequency.value = 0.0001;
|
||||||
|
|
||||||
|
// We keep a first ScriptProcessor to get a periodic callback, since we can't
|
||||||
|
// use setTimeout anymore.
|
||||||
|
sp1.onaudioprocess = function(e) {
|
||||||
|
if (countEventOnFirstSP) {
|
||||||
|
eventReceived++;
|
||||||
|
}
|
||||||
|
if (eventReceived > 3 && suspended) {
|
||||||
|
countEventOnFirstSP = false;
|
||||||
|
eventReceived = 0;
|
||||||
|
ac2.resume().then(function() {
|
||||||
|
resumed = true;
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
sp2.onaudioprocess = function(e) {
|
||||||
|
var inputBuffer = e.inputBuffer.getChannelData(0);
|
||||||
|
if (!resumed) {
|
||||||
|
// save the last value of the buffer before suspending.
|
||||||
|
sp2.value = inputBuffer[inputBuffer.length - 1];
|
||||||
|
ac2.suspend().then(function() {
|
||||||
|
suspended = true;
|
||||||
|
});
|
||||||
|
} else {
|
||||||
|
eventReceived++;
|
||||||
|
if (eventReceived == 3) {
|
||||||
|
var delta = Math.abs(inputBuffer[1] - sp2.value),
|
||||||
|
theoreticalIncrement = 2048 * 3 * Math.PI * 2 * osc1.frequency.value / ac1.sampleRate;
|
||||||
|
ok(delta >= theoreticalIncrement,
|
||||||
|
"Buffering did not occur when the context was suspended (delta:" + delta + " increment: " + theoreticalIncrement+")");
|
||||||
|
ac1.close();
|
||||||
|
ac2.close();
|
||||||
|
sp1.onaudioprocess = null;
|
||||||
|
sp2.onaudioprocess = null;
|
||||||
|
todo(false, "2");
|
||||||
|
finish();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
osc1.connect(mediaStreamDestination1);
|
||||||
|
osc1.connect(sp1);
|
||||||
|
|
||||||
|
mediaStreamAudioSourceNode2.connect(sp2);
|
||||||
|
osc1.start();
|
||||||
|
}
|
||||||
|
|
||||||
|
// Test that ScriptProcessorNode's onaudioprocess don't get called while the
|
||||||
|
// context is suspended/closed. It is possible that we get the handler called
|
||||||
|
// exactly once after suspend, because the event has already been sent to the
|
||||||
|
// event loop.
|
||||||
|
function testScriptProcessNodeSuspended() {
|
||||||
|
var ac = new AudioContext();
|
||||||
|
var sp = ac.createScriptProcessor();
|
||||||
|
var remainingIterations = 30;
|
||||||
|
var afterResume = false;
|
||||||
|
sp.onaudioprocess = function() {
|
||||||
|
ok(ac.state == "running" || remainingIterations == 3, "If onaudioprocess is called, the context" +
|
||||||
|
" must be running (was " + ac.state + ", remainingIterations:" + remainingIterations +")");
|
||||||
|
remainingIterations--;
|
||||||
|
if (!afterResume) {
|
||||||
|
if (remainingIterations == 0) {
|
||||||
|
ac.suspend().then(function() {
|
||||||
|
ac.resume().then(function() {
|
||||||
|
remainingIterations = 30;
|
||||||
|
afterResume = true;
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
sp.onaudioprocess = null;
|
||||||
|
todo(false,"3");
|
||||||
|
finish();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
sp.connect(ac.destination);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Take an AudioContext, make sure it switches to running when the audio starts
// flowing, and then call suspend, resume and close on it, tracking its state.
function testAudioContext() {
  var ac = new AudioContext();
  is(ac.state, "suspended", "AudioContext should start in suspended state.");
  var stateTracker = {
    previous: ac.state,
    // no promise for the initial suspended -> running transition
    initial: { handler: false },
    suspend: { promise: false, handler: false },
    resume: { promise: false, handler: false },
    close: { promise: false, handler: false }
  };

  function initialSuspendToRunning() {
    ok(stateTracker.previous == "suspended" &&
       ac.state == "running",
       "AudioContext should switch to \"running\" when the audio hardware is" +
       " ready.");

    stateTracker.previous = ac.state;
    ac.onstatechange = afterSuspend;
    stateTracker.initial.handler = true;

    ac.suspend().then(function() {
      ok(!stateTracker.suspend.promise && !stateTracker.suspend.handler,
         "Promise should be resolved before the handler, and only once.");
      stateTracker.suspend.promise = true;
    });
  }

  function afterSuspend() {
    ok(stateTracker.previous == "running" &&
       ac.state == "suspended",
       "AudioContext should switch to \"suspended\" when the audio stream is" +
       " suspended.");
    ok(stateTracker.suspend.promise && !stateTracker.suspend.handler,
       "Handler should be called after the promise, and only once.");

    stateTracker.suspend.handler = true;
    stateTracker.previous = ac.state;
    ac.onstatechange = afterResume;

    ac.resume().then(function() {
      ok(!stateTracker.resume.promise && !stateTracker.resume.handler,
         "Promise should be resolved before the handler, and only once.");
      stateTracker.resume.promise = true;
    });
  }

  function afterResume() {
    ok(stateTracker.previous == "suspended" &&
       ac.state == "running",
       "AudioContext should switch to \"running\" when the audio stream resumes.");

    ok(stateTracker.resume.promise && !stateTracker.resume.handler,
       "Handler should be called after the promise, and only once.");

    stateTracker.resume.handler = true;
    stateTracker.previous = ac.state;
    ac.onstatechange = afterClose;

    ac.close().then(function() {
      ok(!stateTracker.close.promise && !stateTracker.close.handler,
         "Promise should be resolved before the handler, and only once.");
      stateTracker.close.promise = true;
      tryToToCreateNodeOnClosedContext(ac);
      tryLegalOpeerationsOnClosedContext(ac);
    });
  }

  function afterClose() {
    ok(stateTracker.previous == "running" &&
       ac.state == "closed",
       "AudioContext should switch to \"closed\" when the audio stream is" +
       " closed.");
    ok(stateTracker.close.promise && !stateTracker.close.handler,
       "Handler should be called after the promise, and only once.");
  }

  ac.onstatechange = initialSuspendToRunning;
}

function testOfflineAudioContext() {
  var o = new OfflineAudioContext(1, 44100, 44100);
  is(o.state, "suspended", "OfflineAudioContext should start in suspended state.");

  expectRejectedPromise(o, "suspend", "NotSupportedError");
  expectRejectedPromise(o, "resume", "NotSupportedError");
  expectRejectedPromise(o, "close", "NotSupportedError");

  var previousState = o.state,
      finishedRendering = false;
  function beforeStartRendering() {
    ok(previousState == "suspended" && o.state == "running",
       "The onstatechange handler is called when the state changes, and the" +
       " new state is running");
    previousState = o.state;
    o.onstatechange = onRenderingFinished;
  }

  function onRenderingFinished() {
    ok(previousState == "running" && o.state == "closed",
       "The onstatechange handler is called when rendering finishes, " +
       "and the new state is closed");
    ok(finishedRendering, "The Promise that is resolved when the rendering is" +
       " done should be resolved earlier than the state change.");
    previousState = o.state;
    o.onstatechange = afterRenderingFinished;

    tryToToCreateNodeOnClosedContext(o);
    tryLegalOpeerationsOnClosedContext(o);
  }

  function afterRenderingFinished() {
    ok(false, "There should be no transition out of the closed state.");
  }

  o.onstatechange = beforeStartRendering;

  o.startRendering().then(function(buffer) {
    finishedRendering = true;
  });
}

var remaining = 0;
function finish() {
  remaining--;
  if (remaining == 0) {
    SimpleTest.finish();
  }
}

SimpleTest.waitForExplicitFinish();
addLoadEvent(function() {
  var tests = [
    testAudioContext,
    testOfflineAudioContext,
    testScriptProcessNodeSuspended,
    testMultiContextOutput,
    testMultiContextInput
  ];
  remaining = tests.length;
  tests.forEach(function(f) { f(); });
});

</script>
</pre>
</body>
</html>
@@ -33,6 +33,18 @@ function expectTypeError(func) {
   ok(threw, "The exception was thrown");
 }

+function expectRejectedPromise(that, func, exceptionName) {
+  var promise = that[func]();
+
+  ok(promise instanceof Promise, "Expect a Promise");
+
+  promise.then(function(res) {
+    ok(false, "Promise resolved when it should have been rejected.");
+  }).catch(function(err) {
+    is(err.name, exceptionName, "Promise correctly rejected with " + exceptionName);
+  });
+}
+
 function fuzzyCompare(a, b) {
   return Math.abs(a - b) < 9e-3;
 }
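(For context, a minimal sketch of how this helper is driven by the tests above; the `offline` variable name is illustrative:)

  var offline = new OfflineAudioContext(1, 44100, 44100);
  // OfflineAudioContext does not support these transitions, so each call
  // is expected to reject with NotSupportedError:
  expectRejectedPromise(offline, "suspend", "NotSupportedError");
  expectRejectedPromise(offline, "resume", "NotSupportedError");
  expectRejectedPromise(offline, "close", "NotSupportedError");
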
@@ -13,6 +13,12 @@
 callback DecodeSuccessCallback = void (AudioBuffer decodedData);
 callback DecodeErrorCallback = void ();

+enum AudioContextState {
+    "suspended",
+    "running",
+    "closed"
+};
+
 [Constructor,
  Constructor(AudioChannel audioChannelType)]
 interface AudioContext : EventTarget {
@@ -21,6 +27,14 @@ interface AudioContext : EventTarget {
   readonly attribute float sampleRate;
   readonly attribute double currentTime;
   readonly attribute AudioListener listener;
+  readonly attribute AudioContextState state;
+  [Throws]
+  Promise<void> suspend();
+  [Throws]
+  Promise<void> resume();
+  [Throws]
+  Promise<void> close();
+  attribute EventHandler onstatechange;

   [NewObject, Throws]
   AudioBuffer createBuffer(unsigned long numberOfChannels, unsigned long length, float sampleRate);
@@ -31,7 +45,7 @@ interface AudioContext : EventTarget {
                                        optional DecodeErrorCallback errorCallback);

   // AudioNode creation
-  [NewObject]
+  [NewObject, Throws]
   AudioBufferSourceNode createBufferSource();

   [NewObject, Throws]
@@ -42,25 +56,25 @@ interface AudioContext : EventTarget {
                                              optional unsigned long numberOfInputChannels = 2,
                                              optional unsigned long numberOfOutputChannels = 2);

-  [NewObject]
+  [NewObject, Throws]
   StereoPannerNode createStereoPanner();
-  [NewObject]
+  [NewObject, Throws]
   AnalyserNode createAnalyser();
   [NewObject, Throws, UnsafeInPrerendering]
   MediaElementAudioSourceNode createMediaElementSource(HTMLMediaElement mediaElement);
   [NewObject, Throws, UnsafeInPrerendering]
   MediaStreamAudioSourceNode createMediaStreamSource(MediaStream mediaStream);
-  [NewObject]
+  [NewObject, Throws]
   GainNode createGain();
   [NewObject, Throws]
   DelayNode createDelay(optional double maxDelayTime = 1);
-  [NewObject]
+  [NewObject, Throws]
   BiquadFilterNode createBiquadFilter();
-  [NewObject]
+  [NewObject, Throws]
   WaveShaperNode createWaveShaper();
-  [NewObject]
+  [NewObject, Throws]
   PannerNode createPanner();
-  [NewObject]
+  [NewObject, Throws]
   ConvolverNode createConvolver();

   [NewObject, Throws]
@@ -68,10 +82,10 @@ interface AudioContext : EventTarget {
   [NewObject, Throws]
   ChannelMergerNode createChannelMerger(optional unsigned long numberOfInputs = 6);

-  [NewObject]
+  [NewObject, Throws]
   DynamicsCompressorNode createDynamicsCompressor();

-  [NewObject]
+  [NewObject, Throws]
   OscillatorNode createOscillator();
   [NewObject, Throws]
   PeriodicWave createPeriodicWave(Float32Array real, Float32Array imag);
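(A rough usage sketch of the surface this IDL adds; variable names are illustrative and semantics follow the Web Audio API spec. A context exposes its lifecycle through `state` and `onstatechange`, and the three transition methods return promises:)

  var ac = new AudioContext();
  ac.onstatechange = function() {
    // ac.state is an AudioContextState: "suspended", "running" or "closed".
    console.log("state is now " + ac.state);
  };
  ac.suspend()
    .then(function() { return ac.resume(); })
    .then(function() { return ac.close(); });
  // Once close() has resolved, there is no transition out of "closed".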