The changed device id handler expects the MediaDevicesManager and the
device id, but only the device id was given.
As the method was recursively called from the handler of the previous
"getUserMedia" promise, trying to access "getUserMedia()" on a string
failed silently; the error was caught by the rejected promise handler,
and the output track was stopped and removed.
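The failure mode can be sketched as follows (all names here are
illustrative, not the actual Talk code): a handler declared with two
parameters silently misbehaves when invoked with only one, because the
device id string lands in the manager parameter.

```javascript
// Illustrative sketch, not the actual Talk code: a handler that expects
// (mediaDevicesManager, deviceId) but is invoked with only the device
// id, so "mediaDevicesManager" ends up being a string.

function changedDeviceIdHandler(mediaDevicesManager, deviceId) {
  if (typeof mediaDevicesManager === 'string') {
    // Calling mediaDevicesManager.getUserMedia() here would fail, and a
    // surrounding promise chain would swallow the error.
    return 'broken: manager is a string'
  }
  return 'ok: manager and device id were both given'
}

const fakeManager = { getUserMedia: () => Promise.resolve() }

// Buggy invocation: only the device id is given.
console.log(changedDeviceIdHandler('some-device-id'))
// → broken: manager is a string

// Fixed invocation: both expected arguments are given.
console.log(changedDeviceIdHandler(fakeManager, 'some-device-id'))
// → ok: manager and device id were both given
```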
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The parameter given to the handler of the "ended" event emitted by
MediaStreamTracks is not the track itself, but an "Event" object. Due
to this the handler parameter never matched any of the known tracks, so
ended tracks were not automatically removed.
In some cases (when emitted from the MediaStreamTrack due to limitations
in the Event API) the "ended" event may not contain the track that it
refers to. Therefore, rather than using a single handler for all the
tracks, each track needs its own handler, explicitly associated with it.
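A per-track handler can be sketched like this (names are assumed, and a
minimal mock stands in for a MediaStreamTrack): the track is captured in
a closure, so nothing needs to be read from the event object.

```javascript
// Sketch with assumed names: each track gets its own "ended" handler
// that captures the track in a closure instead of reading it from the
// event, which may not carry the track.

function watchTrackEnded(track, removeTrack) {
  const handler = () => {
    // The track is known from the closure, not from the event object.
    removeTrack(track)
    track.removeEventListener('ended', handler)
  }
  track.addEventListener('ended', handler)
}

// Minimal mock standing in for a MediaStreamTrack; only the event
// behaviour matters here.
class MockTrack extends EventTarget {}

const removed = []
const track = new MockTrack()
watchTrackEnded(track, (endedTrack) => removed.push(endedTrack))

// The dispatched "ended" event does not reference the track, yet the
// right track is removed thanks to the closure.
track.dispatchEvent(new Event('ended'))
console.log(removed[0] === track)
// → true
```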
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The blur value was fixed and did not change depending on the input video
size. Due to this the background of lower resolution videos looked more
blurry than the background of higher resolution videos, which was
especially noticeable when the same video changed its resolution. Now
the blur is proportional to the input video size, so the blurred
background always looks the same even if the resolution changes
(provided the same aspect ratio is kept between the different
resolutions).
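The idea can be sketched as below; "blurFactor" is an illustrative
constant, not the value used in the real code, and scaling by the height
alone is an assumption that holds when the aspect ratio is kept.

```javascript
// Sketch of a size-proportional blur: with the same aspect ratio, the
// blur radius scales with the resolution, so the blurred background
// looks the same at any size. "blurFactor" is illustrative only.

const blurFactor = 10 / 720

function blurRadiusFor(videoHeight) {
  return Math.round(videoHeight * blurFactor)
}

console.log(blurRadiusFor(720)) // → 10
console.log(blurRadiusFor(360)) // → 5, the same relative blur at half the size
```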
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
When the MediaDevicesSource node is active and a media device changes,
the current stream is replaced by a stream from the newly selected
device. This is an asynchronous operation, so changing the stream again
on further device changes is deferred until the previous one has
finished. However, when the changed device was the same as the current
device (which should not happen, although it could potentially happen
with a specific sequence of "devicechange" events emitted by the
browser), the pending request count was not cleared, so any further
device change was ignored until the page was reloaded.
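The guard can be sketched as follows (class and member names are
assumed, and the asynchronous stream replacement is mocked): the pending
count must be cleared on every code path, including the "same device,
nothing to do" one that the fix addresses.

```javascript
// Sketch with assumed names: a pending counter defers overlapping
// stream replacements; "finally" guarantees it is cleared even on the
// early "same device" return that previously leaked it.

class DeviceSwitcher {
  constructor() {
    this.currentDeviceId = null
    this.pending = 0
  }

  async changeDevice(deviceId) {
    if (this.pending > 0) {
      // A replacement is already in flight; the change is deferred.
      return false
    }
    this.pending++
    try {
      if (deviceId === this.currentDeviceId) {
        // Before the fix this path did not clear the pending count, so
        // every later device change was ignored.
        return true
      }
      // Stands in for the asynchronous stream replacement.
      this.currentDeviceId = deviceId
      return true
    } finally {
      this.pending--
    }
  }
}

const switcher = new DeviceSwitcher()
switcher.changeDevice('cam-1')
  .then(() => switcher.changeDevice('cam-1')) // same device, still clears pending
  .then(() => switcher.changeDevice('cam-2'))
  .then((changed) => console.log(changed, switcher.currentDeviceId))
// → true cam-2
```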
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The original segmentation models used in Jitsi were relicensed by Google
from the Apache license to a proprietary one. While in theory the
previous versions should still be usable under the Apache license, it
seems that they were never intended to be released as such. Therefore,
the segmentation model is now replaced with the MediaPipe Selfie
Segmentation model.
Code changes are based on commit
"9b6b335c60223dc7615b308b8a25a263c7fc95eb" of repository
"https://github.com/jitsi/jitsi-meet".
"selfie_segmentation_landscape.tflite" was copied from
"mediapipe/modules/selfie_segmentation/selfie_segmentation.tflite" of
repository "https://github.com/google/mediapipe" at commit
"8b57bf879b419173b26277d220b643dac0402334".
"Model Card MediaPipe Selfie Segmentation.pdf" was downloaded from
"https://drive.google.com/file/d/1dCfozqknMa068vVsO2j_1FgZkW_e3VWv/view".
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The input track is played in a video element which, in turn, is drawn on
the canvas used to calculate the segmentation mask and on the output
canvas. When the resolution of its input track changes, a video element
may still play the track using the old resolution for some frames (this
happens with real hardware cameras, although apparently not with virtual
devices), even if "track.getSettings()" already returns the new
resolution.
Due to this mismatch, when the resolution of the input track changed
(for example, due to a quality change in a call with several
participants), the video could zoom in and out, as the output canvas
size was based on the new size, while the video element drawn on it was
still using the old size.
Fortunately, the "videoWidth" and "videoHeight" attributes of the video
element seem to reflect the actual size of the video being played, so
those values can be used to do the calculations instead of relying on
the expected video size.
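A sketch of the approach, with a plain object mocking the video element
(the helper name is assumed): the output size is taken from what the
element actually plays rather than from the track settings.

```javascript
// Sketch: size calculations use what the video element actually plays
// ("videoWidth"/"videoHeight") instead of "track.getSettings()", which
// can already report the new resolution while old-sized frames are
// still being rendered.

function outputSizeFor(videoElement) {
  return {
    width: videoElement.videoWidth,
    height: videoElement.videoHeight,
  }
}

// Mock of the mismatch: the track settings already claim 1280x720 while
// the element is still playing 640x360 frames.
const trackSettings = { width: 1280, height: 720 }
const videoElement = { videoWidth: 640, videoHeight: 360 }

// Sizing from the element, not from trackSettings, avoids the zoom.
console.log(outputSizeFor(videoElement))
// → { width: 640, height: 360 }
```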
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
When the constraints of the input stream were changed, the background
blur was stopped and started again to apply the new width, height and
frame rate to the output. However, changing a stream (any stream; it is
unrelated to the background blur) causes flickering (one or more black
frames in between), so every time that the sent video quality was
adjusted the local video flickered.
Now, instead of resetting the background blur, the internal elements are
updated and adjusted to the new constraints of the input, but the same
output stream is kept. This avoids the stream change and thus the
flickering. Note, however, that the video will freeze temporarily
instead while the input stream is being re-rendered, although this
should be less annoying than the flickering.
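The shape of the fix can be sketched with mocked canvases (class and
member names are assumed; in the real code the output stream would come
from "canvas.captureStream()"): only the internal elements are resized,
while the output stream object stays the same.

```javascript
// Sketch with mocked internals: on a constraint change the canvas sizes
// are updated in place and the same output stream is kept, instead of
// tearing the pipeline down and capturing a new stream, which caused
// the flickering.

class BlurPipeline {
  constructor(outputStream) {
    this.outputCanvas = { width: 0, height: 0 }
    // In the real code this would be the captureStream() result.
    this.outputStream = outputStream
  }

  applyConstraints(width, height) {
    // Only the internal elements change; the output stream is untouched.
    this.outputCanvas.width = width
    this.outputCanvas.height = height
  }
}

const stream = { id: 'output' } // stands in for a real MediaStream
const pipeline = new BlurPipeline(stream)
pipeline.applyConstraints(1280, 720)
pipeline.applyConstraints(640, 360)

// The consumers of the stream never see it replaced.
console.log(pipeline.outputStream === stream)
// → true
```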
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The segmentation was always run at 30 FPS. However, if the input stream
has a lower FPS, calculating the segmentation again is a waste of
resources, as it might not have changed since the previous input frame.
Moreover, as the output stream also runs at the same FPS as the input,
the canvas could be drawn without any effect on the output.
As the input stream, the calculation of the segmentation and the drawing
of a new output frame are not synchronized, in some cases this could
introduce some lag between the drawn segmentation and the input video.
However, that could already happen due to the lack of synchronization
(the fixed 30 FPS only overcame it on lower FPS inputs by sheer luck and
brute force), and the reduced load is worth that minor annoyance. This
could be solved by synchronizing the input stream and the segmentation
mask, but that comes with its own challenges.
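The frame-rate matching amounts to the following (the helper name is
assumed; in the real code the frame rate would come from
"track.getSettings().frameRate"): one segmentation per expected input
frame instead of a fixed 30 per second.

```javascript
// Sketch: derive the interval between segmentation runs from the input
// track's frame rate instead of a hardcoded 30 FPS.

function segmentationIntervalMs(frameRate) {
  // One segmentation per expected input frame.
  return 1000 / frameRate
}

console.log(segmentationIntervalMs(30)) // ≈ 33.33 ms between runs
console.log(segmentationIntervalMs(15)) // ≈ 66.67 ms, half the work on a 15 FPS input
```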
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Whenever a segmentation is calculated it is used to draw the input
stream on the output canvas with the background blurred. The
segmentation calculation was triggered at regular intervals, but whether
a previous segmentation was still being calculated was not taken into
account. Now a new segmentation is calculated only if the previous one
has already finished.
This prevents running outdated segmentations and thus should reduce the
load on low end devices in which calculating the segmentation for a
frame takes longer than the time elapsed between frames.
Besides that, if the input stream was changed the previous segmentation
could be processed when the new one had not finished loading yet, which
could lead to visual glitches. This is also implicitly fixed by this
change, as the previous segmentation will be discarded if it does not
match the expected frame id.
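Both guards can be sketched together (class and member names are
assumed, and an async function mocks the segmentation): a "running" flag
skips ticks while a calculation is in flight, and a frame id discards
results from a stale run after the input stream changed.

```javascript
// Sketch with assumed names: a new segmentation starts only when the
// previous one has finished, and results whose frame id no longer
// matches (e.g. after an input stream change) are discarded.

class SegmentationScheduler {
  constructor(segment) {
    this.segment = segment // async function computing one mask
    this.running = false
    this.frameId = 0
  }

  restart() {
    // Called when the input stream changes; in-flight results from the
    // old stream will no longer match and will be discarded.
    this.frameId++
  }

  async tick(onMask) {
    if (this.running) {
      return false // the previous segmentation is still in progress
    }
    this.running = true
    const frameId = this.frameId
    try {
      const mask = await this.segment()
      if (frameId !== this.frameId) {
        return false // stale result, discard it
      }
      onMask(mask)
      return true
    } finally {
      this.running = false
    }
  }
}

const scheduler = new SegmentationScheduler(async () => 'mask')
scheduler.tick((mask) => console.log('drawing with', mask))
// → drawing with mask
```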
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>