TL;DR: requesting a fake stream always gives you a fake stream. No magic.
The gUMConstraint `fake: true` should take precedence: if it is set, always
use MediaEngineDefault, with the state of `faketracks` passed on to
MediaEngineDefault.
If it is not set, but any audio/video loopback devices are configured,
device enumeration is filtered down to only those devices.
--HG--
extra : commitid : ACLnd4zWe6Y
extra : rebase_source : 3fc85bb11def1d19707338baf05317d2ee216b44
This simplifies updating to a specific revision instead of
always defaulting to master. e.g.
npm install
node update-webvtt.js -d ~/vtt.js -r v0.12.1
Note that the script will clobber the given repo's HEAD, checking
out the given rev (or master) instead.
YouTube and WebVR have been experimenting with 8k video for
immersive applications, where you need more than 4k resolution
even on a mid-resolution display because you're not looking
at the whole scene simultaneously.
We were rejecting video frames larger than 4000x3000,
or 16k in any one dimension, to limit resource exhaustion
attacks. Bump this to accept 8k video now that there's
a demand for it.
Efficiency is proportional to stage size, so start with the largest size
possible.
--HG--
extra : rebase_source : 34915efce1eb94e18f53adf35dc939301242467a
Now, the most FFT work that happens during one realtime processing block is
when one 2048-size stage and the 256-size stage are performed at the same
phase-offset. Before FFT timing was controlled by initial input buffer offset
(bug 1221831), two 1024-size stages as well as the 512- and 256-size stages
performed FFTs at one offset. Thus, the maximum work in one block is reduced
by a ratio of about 11 to 9.
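The 11:9 figure follows from the stage sizes above, assuming per-block FFT
work is simply proportional to stage size (a sketch of the arithmetic, not
the actual ReverbConvolver accounting):

```python
from fractions import Fraction

# Worst block before: two 1024-size stages plus the 512- and 256-size
# stages performed FFTs at one offset.
before = 1024 + 1024 + 512 + 256
# Worst block now: one 2048-size stage and the 256-size stage at the
# same phase-offset.
after = 2048 + 256

print(Fraction(before, after))  # 11/9
```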
Measurements also indicate a similar reduction in total rendering thread
CPU usage.
Previously, the alignment of the eleven 1024-size realtime stages was such
that, in three consecutive blocks, two 1024-size stages would perform their
FFTs. Now the 2048-size stages are aligned so that none of them perform
their FFTs in consecutive blocks.
--HG--
extra : rebase_source : 7265374c1642661db1d4f4d630ddc8294be689c7
as with the main thread.
The comment was incomplete, as ReverbConvolverStage also supports multiples
of the FFT halfsize, but only values up to WEBAUDIO_BLOCK_SIZE.
--HG--
extra : rebase_source : 34f11834dd425075e8948f47dcc5283dcb50fc42
This makes PlanarYCbCrImage abstract and moves the recycling functionality
into RecyclingPlanarYCbCrImage. This decreases the size of
SharedPlanarYCbCrImage and makes it possible for us to do part 3 of bug
1216644.
This modifies the special case code for pan == 0.0f to apply the input gain.
--HG--
extra : commitid : LAEwrqMnjQi
extra : rebase_source : 735cabd0f9bc7a857a8382c712329e8353b88ad0
This is in the mochitest suite so that Android and B2G tests can run it, but
designed so that it can be moved to web-platform-tests when they run on all
platforms.
--HG--
extra : rebase_source : 775f1d9e4122d52cd58c0d6893681d31268cb715
BufferComplexMultiply knows nothing about this format and so ends up
corrupting the DC coefficient if packed Nyquists are multiplied.
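For illustration, a minimal sketch of what a packed-format-aware multiply
looks like. This assumes the common half-complex layout in which bin 0
packs two real values, DC in the real slot and Nyquist in the imaginary
slot; the function name is hypothetical, not Gecko's API. A naive complex
multiply on bin 0 mixes the DC and Nyquist values together, so the packed
bin must be multiplied component-wise:

```python
def packed_multiply(a, b):
    """Multiply two spectra stored in packed half-complex layout.

    Bin 0 packs two independent real values: DC in .real and Nyquist
    in .imag. Treating it as an ordinary complex number corrupts both,
    so it is handled component-wise; all other bins are ordinary
    complex products.
    """
    out = [a[0].real * b[0].real + 1j * (a[0].imag * b[0].imag)]
    out.extend(x * y for x, y in zip(a[1:], b[1:]))
    return out
```

A plain element-wise complex multiply over the same arrays would produce
`a[0] * b[0]` for the packed bin, cross-contaminating DC with Nyquist.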
--HG--
extra : rebase_source : feccac4be8d278dc0be020185065a1a9fa596d9c
https://github.com/WebAudio/web-audio-api/issues/304
NotSupportedError is chosen for more sensible meaning and consistency with
other nodes.
--HG--
extra : rebase_source : a5b8b8af0aeb3751d299b5fe785afb9a99fe5dea
(Doing the extra ProcessBlock for the sake of downstream nodes was unnecessary
even before the inactive check was delayed until after their processing, because
downstream nodes would have only had null chunks to process anyway.)
--HG--
extra : rebase_source : d1dd8a228a23520a23e77e29ae3d5040e6505eb8
Since changes for bug 1217625, the node and downstream nodes won't be made
inactive until after downstream nodes have done their processing, and so there
is no need to wait for the first silent output block.
This essentially reverts 5c607f3f39d55544838f3111ede9e11a00d3c25e.
--HG--
extra : rebase_source : f449c427b580239f9072cc7c580585f10b69608f