Mozilla Voice STT master Examples
=================================

These are various user-contributed examples showing how to use or integrate Mozilla Voice STT using our packages.

They are a good way to try out Mozilla Voice STT before learning how it works in detail, as well as a source of inspiration for ways to integrate it into your application or to solve common tasks like voice activity detection (VAD) or microphone streaming.
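For orientation, here is a minimal sketch of transcribing a short WAV file with the Python package. It assumes the pre-rename ``deepspeech`` pip package and placeholder model, scorer, and audio file names; adjust the import and paths to match the release you have installed.

.. code-block:: python

   import wave

   import numpy as np
   import deepspeech  # assumption: package name may differ on renamed releases

   # Placeholder file names: substitute the acoustic model and scorer you downloaded.
   model = deepspeech.Model('model.pbmm')
   model.enableExternalScorer('model.scorer')

   # The model expects 16-bit mono PCM at the model's sample rate (usually 16 kHz).
   with wave.open('audio.wav', 'rb') as wav:
       frames = wav.readframes(wav.getnframes())
   audio = np.frombuffer(frames, dtype=np.int16)

   print(model.stt(audio))

The streaming and VAD examples listed below build on the same API, but typically feed audio in chunks through a streaming interface instead of a single buffer.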

Please understand that these examples are provided as-is, with no guarantee that they will work in every configuration.

Contributions like fixes to existing examples or new ones are welcome!

**Note:** These examples target the Mozilla Voice STT **master branch** only. If you're using a different release, switch to the branch for that release:

* `v0.7.x <https://github.com/mozilla/STT-examples/tree/r0.7>`_
* `v0.6.x <https://github.com/mozilla/STT-examples/tree/r0.6>`_
* `master branch <https://github.com/mozilla/STT-examples/tree/master>`_

**List of examples**

Python:
-------

* `Microphone VAD streaming <mic_vad_streaming/README.rst>`_
* `VAD transcriber <vad_transcriber/>`_

JavaScript:
-----------

* `FFmpeg VAD streaming <ffmpeg_vad_streaming/README.MD>`_
* `Node.js microphone VAD streaming <nodejs_mic_vad_streaming/Readme.md>`_
* `Node.js WAV <nodejs_wav/Readme.md>`_
* `Web microphone WebSocket streaming <web_microphone_websocket/Readme.md>`_
* `Electron WAV transcriber <electron/Readme.md>`_

C#/.NET:
--------

* `.NET framework <net_framework/>`_

Java/Android:
-------------

* `mozilla/androidspeech library <https://github.com/mozilla/androidspeech/>`_