This commit is contained in:
Reuben Morais 2020-08-06 14:20:39 +02:00
Parents 2eb75b6206 0b51004081
Commit ae9fdb183e
184 changed files with 1333 additions and 1497 deletions

.gitignore vendored

@ -34,3 +34,6 @@
/doc/xml-java/
Dockerfile.build
Dockerfile.train
doc/xml-c
doc/xml-java
doc/xml-dotnet


@ -1,5 +1,5 @@
This file contains a list of papers in chronological order that have been published
using Mozilla's DeepSpeech.
using Mozilla Voice STT.
To appear
==========


@ -149,12 +149,12 @@ RUN bazel build \
--copt=-msse4.2 \
--copt=-mavx \
--copt=-fvisibility=hidden \
//native_client:libdeepspeech.so \
//native_client:libmozilla_voice_stt.so \
--verbose_failures \
--action_env=LD_LIBRARY_PATH=${LD_LIBRARY_PATH}
# Copy built libs to /DeepSpeech/native_client
RUN cp bazel-bin/native_client/libdeepspeech.so /DeepSpeech/native_client/
RUN cp bazel-bin/native_client/libmozilla_voice_stt.so /DeepSpeech/native_client/
# Build client.cc and install Python client and decoder bindings
ENV TFDIR /DeepSpeech/tensorflow
@ -162,7 +162,7 @@ ENV TFDIR /DeepSpeech/tensorflow
RUN nproc
WORKDIR /DeepSpeech/native_client
RUN make NUM_PROCESSES=$(nproc) deepspeech
RUN make NUM_PROCESSES=$(nproc) mozilla_voice_stt
WORKDIR /DeepSpeech
RUN cd native_client/python && make NUM_PROCESSES=$(nproc) bindings


@ -1,5 +1,5 @@
Project DeepSpeech
==================
Mozilla Voice STT
=================
.. image:: https://readthedocs.org/projects/deepspeech/badge/?version=latest
@ -12,7 +12,7 @@ Project DeepSpeech
:alt: Task Status
DeepSpeech is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on `Baidu's Deep Speech research paper <https://arxiv.org/abs/1412.5567>`_. Project DeepSpeech uses Google's `TensorFlow <https://www.tensorflow.org/>`_ to make the implementation easier.
Mozilla Voice STT is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on `Baidu's Deep Speech research paper <https://arxiv.org/abs/1412.5567>`_. Mozilla Voice STT uses Google's `TensorFlow <https://www.tensorflow.org/>`_ to make the implementation easier.
Documentation for installation, usage, and training models is available on `deepspeech.readthedocs.io <http://deepspeech.readthedocs.io/?badge=latest>`_.


@ -1,11 +1,5 @@
DeepSpeech Model
================
The aim of this project is to create a simple, open, and ubiquitous speech
recognition engine. Simple, in that the engine should not require server-class
hardware to execute. Open, in that the code and models are released under the
Mozilla Public License. Ubiquitous, in that the engine should run on many
platforms and have bindings to many different languages.
Mozilla Voice STT Acoustic Model
================================
The architecture of the engine was originally motivated by that presented in
`Deep Speech: Scaling up end-to-end speech recognition <http://arxiv.org/abs/1412.5567>`_.
@ -77,7 +71,7 @@ with respect to all of the model parameters may be done via back-propagation
through the rest of the network. We use the Adam method for training
`[3] <http://arxiv.org/abs/1412.6980>`_.
The complete RNN model is illustrated in the figure below.
The complete LSTM model is illustrated in the figure below.
.. image:: ../images/rnn_fig-624x598.png
:alt: DeepSpeech BRNN
:alt: Mozilla Voice STT LSTM


@ -1,12 +1,12 @@
.. _build-native-client:
Building DeepSpeech Binaries
============================
Building Mozilla Voice STT Binaries
===================================
This section describes how to rebuild binaries. We already provide prebuilt binaries for all the supported platforms;
it is highly advised to use them unless you know what you are doing.
If you'd like to build the DeepSpeech binaries yourself, you'll need the following pre-requisites downloaded and installed:
If you'd like to build the Mozilla Voice STT binaries yourself, you'll need the following pre-requisites downloaded and installed:
* `Bazel 2.0.0 <https://github.com/bazelbuild/bazel/releases/tag/2.0.0>`_
* `General TensorFlow r2.2 requirements <https://www.tensorflow.org/install/source#tested_build_configurations>`_
@ -26,14 +26,14 @@ If you'd like to build the language bindings or the decoder package, you'll also
Dependencies
------------
If you follow these instructions, you should compile your own binaries of DeepSpeech (built on TensorFlow using Bazel).
If you follow these instructions, you should compile your own binaries of Mozilla Voice STT (built on TensorFlow using Bazel).
For more information on configuring TensorFlow, read the docs up to the end of `"Configure the Build" <https://www.tensorflow.org/install/source#configure_the_build>`_.
Checkout source code
^^^^^^^^^^^^^^^^^^^^
Clone DeepSpeech source code (TensorFlow will come as a submodule):
Clone Mozilla Voice STT source code (TensorFlow will come as a submodule):
.. code-block::
@ -56,24 +56,24 @@ After you have installed the correct version of Bazel, configure TensorFlow:
cd tensorflow
./configure
Compile DeepSpeech
------------------
Compile Mozilla Voice STT
-------------------------
Compile ``libdeepspeech.so``
Compile ``libmozilla_voice_stt.so``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Within your TensorFlow directory, there should be a symbolic link to the DeepSpeech ``native_client`` directory. If it is not present, create it with the following command:
Within your TensorFlow directory, there should be a symbolic link to the Mozilla Voice STT ``native_client`` directory. If it is not present, create it with the following command:
.. code-block::
cd tensorflow
ln -s ../native_client
You can now use Bazel to build the main DeepSpeech library, ``libdeepspeech.so``. Add ``--config=cuda`` if you want a CUDA build.
You can now use Bazel to build the main Mozilla Voice STT library, ``libmozilla_voice_stt.so``. Add ``--config=cuda`` if you want a CUDA build.
.. code-block::
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libdeepspeech.so
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libmozilla_voice_stt.so
The generated binaries will be saved to ``bazel-bin/native_client/``.
@ -82,12 +82,12 @@ The generated binaries will be saved to ``bazel-bin/native_client/``.
Compile ``generate_scorer_package``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Following the same setup as for ``libdeepspeech.so`` above, you can rebuild the ``generate_scorer_package`` binary by adding its target to the command line: ``//native_client:generate_scorer_package``.
Following the same setup as for ``libmozilla_voice_stt.so`` above, you can rebuild the ``generate_scorer_package`` binary by adding its target to the command line: ``//native_client:generate_scorer_package``.
Using the example above, you can build the library and that binary at the same time:
.. code-block::
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_scorer_package
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libmozilla_voice_stt.so //native_client:generate_scorer_package
The generated binaries will be saved to ``bazel-bin/native_client/``.
@ -99,7 +99,7 @@ Now, ``cd`` into the ``DeepSpeech/native_client`` directory and use the ``Makefi
.. code-block::
cd ../DeepSpeech/native_client
make deepspeech
make mozilla_voice_stt
Installing your own Binaries
----------------------------
@ -121,9 +121,9 @@ Included are a set of generated Python bindings. After following the above build
cd native_client/python
make bindings
pip install dist/deepspeech*
pip install dist/mozilla_voice_stt*
The API mirrors the C++ API and is demonstrated in `client.py <python/client.py>`_. Refer to `deepspeech.h <deepspeech.h>`_ for documentation.
The API mirrors the C++ API and is demonstrated in `client.py <python/client.py>`_. Refer to the `C API <c-usage>` for documentation.
Install NodeJS / ElectronJS bindings
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -136,7 +136,7 @@ After following the above build and installation instructions, the Node.JS bindi
make build
make npm-pack
This will create the package ``deepspeech-VERSION.tgz`` in ``native_client/javascript``.
This will create the package ``mozilla_voice_stt-VERSION.tgz`` in ``native_client/javascript``.
Install the CTC decoder package
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -165,23 +165,23 @@ So your command line for ``RPi3`` and ``ARMv7`` should look like:
.. code-block::
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3 --config=rpi3_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libdeepspeech.so
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3 --config=rpi3_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libmozilla_voice_stt.so
And your command line for ``LePotato`` and ``ARM64`` should look like:
.. code-block::
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3-armv8 --config=rpi3-armv8_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libdeepspeech.so
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3-armv8 --config=rpi3-armv8_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libmozilla_voice_stt.so
While we test only on RPi3 Raspbian Buster and LePotato ARMBian Buster, anything compatible with ``armv7-a cortex-a53`` or ``armv8-a cortex-a53`` should be fine.
The ``deepspeech`` binary can also be cross-built, with ``TARGET=rpi3`` or ``TARGET=rpi3-armv8``. This might require you to set up a system tree using the ``multistrap`` tool and the multistrap configuration files: ``native_client/multistrap_armbian64_buster.conf`` and ``native_client/multistrap_raspbian_buster.conf``.
The ``mozilla_voice_stt`` binary can also be cross-built, with ``TARGET=rpi3`` or ``TARGET=rpi3-armv8``. This might require you to set up a system tree using the ``multistrap`` tool and the multistrap configuration files: ``native_client/multistrap_armbian64_buster.conf`` and ``native_client/multistrap_raspbian_buster.conf``.
The path of the system tree can be overridden from the default values defined in ``definitions.mk`` through the ``RASPBIAN`` ``make`` variable.
.. code-block::
cd ../DeepSpeech/native_client
make TARGET=<system> deepspeech
make TARGET=<system> mozilla_voice_stt
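As an illustration only (the system tree location is an assumption, not a documented default), preparing a Raspbian system tree with ``multistrap`` and pointing the cross-build at it might look like:

.. code-block::

multistrap -d /tmp/multistrap-raspbian-buster -f native_client/multistrap_raspbian_buster.conf
make TARGET=rpi3 RASPBIAN=/tmp/multistrap-raspbian-buster mozilla_voice_stt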
Android devices support
-----------------------
@ -193,9 +193,9 @@ Please refer to TensorFlow documentation on how to setup the environment to buil
Using the library from Android project
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We provide uptodate and tested ``libdeepspeech`` usable as an ``AAR`` package,
We provide an up-to-date and tested STT library usable as an ``AAR`` package,
for Android versions 7.0 to 11.0. The package is published on
`JCenter <https://bintray.com/alissy/org.mozilla.deepspeech/libdeepspeech>`_,
`JCenter <https://bintray.com/alissy/org.mozilla.voice/stt>`_,
and the ``JCenter`` repository should be available by default in any Android
project. Please make sure your project is set up to pull from this repository.
You can then include the library by just adding this line to your
@ -203,43 +203,43 @@ You can then include the library by just adding this line to your
.. code-block::
implementation 'deepspeech.mozilla.org:libdeepspeech:VERSION@aar'
implementation 'voice.mozilla.org:stt:VERSION@aar'
Building ``libdeepspeech.so``
Building ``libmozilla_voice_stt.so``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can build the ``libdeepspeech.so`` using (ARMv7):
You can build the ``libmozilla_voice_stt.so`` using (ARMv7):
.. code-block::
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++14 --copt=-D_GLIBCXX_USE_C99 //native_client:libdeepspeech.so
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++14 --copt=-D_GLIBCXX_USE_C99 //native_client:libmozilla_voice_stt.so
Or (ARM64):
.. code-block::
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm64 --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++14 --copt=-D_GLIBCXX_USE_C99 //native_client:libdeepspeech.so
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm64 --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++14 --copt=-D_GLIBCXX_USE_C99 //native_client:libmozilla_voice_stt.so
Building ``libdeepspeech.aar``
Building ``libmozillavoicestt.aar``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In the unlikely event you have to rebuild the JNI bindings, source code is
available under the ``libdeepspeech`` subdirectory. Building depends on shared
object: please ensure to place ``libdeepspeech.so`` into the
``libdeepspeech/libs/{arm64-v8a,armeabi-v7a,x86_64}/`` matching subdirectories.
available under the ``libmozillavoicestt`` subdirectory. Building depends on the shared
object: please make sure to place ``libmozilla_voice_stt.so`` into the
matching ``libmozillavoicestt/libs/{arm64-v8a,armeabi-v7a,x86_64}/`` subdirectories.
Building the bindings is managed by ``gradle`` and should be limited to issuing
``./gradlew libdeepspeech:build``, producing an ``AAR`` package in
``./libdeepspeech/build/outputs/aar/``.
``./gradlew libmozillavoicestt:build``, producing an ``AAR`` package in
``./libmozillavoicestt/build/outputs/aar/``.
Please note that you might have to copy the file to a local Maven repository
and adapt the file naming (when missing, the error message should state what
filename it expects and where).
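If you do end up needing this, a sketch using Maven's ``install:install-file`` goal might look like the following; the AAR file name is an assumption, and the coordinates should match whatever the error message asks for:

.. code-block::

mvn install:install-file -Dfile=./libmozillavoicestt/build/outputs/aar/libmozillavoicestt-release.aar \
    -DgroupId=voice.mozilla.org -DartifactId=stt -Dversion=VERSION -Dpackaging=aar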
Building C++ ``deepspeech`` binary
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Building C++ ``mozilla_voice_stt`` binary
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Building the ``deepspeech`` binary will happen through ``ndk-build`` (ARMv7):
Building the ``mozilla_voice_stt`` binary will happen through ``ndk-build`` (ARMv7):
.. code-block::
@ -272,13 +272,13 @@ demo of one usage of the application. For example, it's only able to read PCM
mono 16kHz 16-bit files and it might fail on some WAVE files that do not
exactly follow the specification.
Running ``deepspeech`` via adb
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Running ``mozilla_voice_stt`` via adb
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You should use ``adb push`` to send data to the device; please refer to the Android
documentation on how to use it. A sketch of the full sequence of push commands is shown after the file lists below.
Please push DeepSpeech data to ``/sdcard/deepspeech/``\ , including:
Please push Mozilla Voice STT data to ``/sdcard/mozilla_voice_stt/``\ , including:
* ``output_graph.tflite`` which is the TF Lite model
@ -286,18 +286,18 @@ Please push DeepSpeech data to ``/sdcard/deepspeech/``\ , including:
the scorer; please be aware that a scorer that is too big will make the device run out
of memory
Then, push binaries from ``native_client.tar.xz`` to ``/data/local/tmp/ds``\ :
Then, push binaries from ``native_client.tar.xz`` to ``/data/local/tmp/stt``\ :
* ``deepspeech``
* ``libdeepspeech.so``
* ``mozilla_voice_stt``
* ``libmozilla_voice_stt.so``
* ``libc++_shared.so``
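As a sketch of the full sequence (the scorer file name is illustrative; the other paths and file names match the lists above):

.. code-block::

adb shell mkdir -p /sdcard/mozilla_voice_stt/ /data/local/tmp/stt/
adb push output_graph.tflite /sdcard/mozilla_voice_stt/
adb push kenlm.scorer /sdcard/mozilla_voice_stt/
adb push mozilla_voice_stt /data/local/tmp/stt/
adb push libmozilla_voice_stt.so /data/local/tmp/stt/
adb push libc++_shared.so /data/local/tmp/stt/
adb shell chmod +x /data/local/tmp/stt/mozilla_voice_stt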
You should then be able to run as usual, using a shell from ``adb shell``\ :
.. code-block::
user@device$ cd /data/local/tmp/ds/
user@device$ LD_LIBRARY_PATH=$(pwd)/ ./deepspeech [...]
user@device$ cd /data/local/tmp/stt/
user@device$ LD_LIBRARY_PATH=$(pwd)/ ./mozilla_voice_stt [...]
Please note that the Android linker does not support ``rpath``, so you have to set
``LD_LIBRARY_PATH``. Properly wrapped / packaged bindings do embed the library


@ -10,56 +10,59 @@ C API
See also the list of error codes including descriptions for each error in :ref:`error-codes`.
.. doxygenfunction:: DS_CreateModel
.. doxygenfunction:: STT_CreateModel
:project: deepspeech-c
.. doxygenfunction:: DS_FreeModel
.. doxygenfunction:: STT_FreeModel
:project: deepspeech-c
.. doxygenfunction:: DS_EnableExternalScorer
.. doxygenfunction:: STT_EnableExternalScorer
:project: deepspeech-c
.. doxygenfunction:: DS_DisableExternalScorer
.. doxygenfunction:: STT_DisableExternalScorer
:project: deepspeech-c
.. doxygenfunction:: DS_SetScorerAlphaBeta
.. doxygenfunction:: STT_SetScorerAlphaBeta
:project: deepspeech-c
.. doxygenfunction:: DS_GetModelSampleRate
.. doxygenfunction:: STT_GetModelSampleRate
:project: deepspeech-c
.. doxygenfunction:: DS_SpeechToText
.. doxygenfunction:: STT_SpeechToText
:project: deepspeech-c
.. doxygenfunction:: DS_SpeechToTextWithMetadata
.. doxygenfunction:: STT_SpeechToTextWithMetadata
:project: deepspeech-c
.. doxygenfunction:: DS_CreateStream
.. doxygenfunction:: STT_CreateStream
:project: deepspeech-c
.. doxygenfunction:: DS_FeedAudioContent
.. doxygenfunction:: STT_FeedAudioContent
:project: deepspeech-c
.. doxygenfunction:: DS_IntermediateDecode
.. doxygenfunction:: STT_IntermediateDecode
:project: deepspeech-c
.. doxygenfunction:: DS_IntermediateDecodeWithMetadata
.. doxygenfunction:: STT_IntermediateDecodeWithMetadata
:project: deepspeech-c
.. doxygenfunction:: DS_FinishStream
.. doxygenfunction:: STT_FinishStream
:project: deepspeech-c
.. doxygenfunction:: DS_FinishStreamWithMetadata
.. doxygenfunction:: STT_FinishStreamWithMetadata
:project: deepspeech-c
.. doxygenfunction:: DS_FreeStream
.. doxygenfunction:: STT_FreeStream
:project: deepspeech-c
.. doxygenfunction:: DS_FreeMetadata
.. doxygenfunction:: STT_FreeMetadata
:project: deepspeech-c
.. doxygenfunction:: DS_FreeString
.. doxygenfunction:: STT_FreeString
:project: deepspeech-c
.. doxygenfunction:: DS_Version
.. doxygenfunction:: STT_Version
:project: deepspeech-c
.. doxygenfunction:: STT_ErrorCodeToErrorMessage
:project: deepspeech-c


@ -6,7 +6,7 @@ CTC beam search decoder
Introduction
^^^^^^^^^^^^
DeepSpeech uses the `Connectionist Temporal Classification <http://www.cs.toronto.edu/~graves/icml_2006.pdf>`_ loss function. For an excellent explanation of CTC and its usage, see this Distill article: `Sequence Modeling with CTC <https://distill.pub/2017/ctc/>`_. This document assumes the reader is familiar with the concepts described in that article, and describes DeepSpeech specific behaviors that developers building systems with DeepSpeech should know to avoid problems.
Mozilla Voice STT uses the `Connectionist Temporal Classification <http://www.cs.toronto.edu/~graves/icml_2006.pdf>`_ loss function. For an excellent explanation of CTC and its usage, see this Distill article: `Sequence Modeling with CTC <https://distill.pub/2017/ctc/>`_. This document assumes the reader is familiar with the concepts described in that article, and describes Mozilla Voice STT specific behaviors that developers building systems with Mozilla Voice STT should know to avoid problems.
Note: Documentation for the tooling for creating custom scorer packages is available in :ref:`scorer-scripts`.
@ -16,19 +16,19 @@ The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "S
External scorer
^^^^^^^^^^^^^^^
DeepSpeech clients support OPTIONAL use of an external language model to improve the accuracy of the predicted transcripts. In the code, command line parameters, and documentation, this is referred to as a "scorer". The scorer is used to compute the likelihood (also called a score, hence the name "scorer") of sequences of words or characters in the output, to guide the decoder towards more likely results. This improves accuracy significantly.
Mozilla Voice STT clients support OPTIONAL use of an external language model to improve the accuracy of the predicted transcripts. In the code, command line parameters, and documentation, this is referred to as a "scorer". The scorer is used to compute the likelihood (also called a score, hence the name "scorer") of sequences of words or characters in the output, to guide the decoder towards more likely results. This improves accuracy significantly.
The use of an external scorer is fully optional. When an external scorer is not specified, DeepSpeech still uses a beam search decoding algorithm, but without any outside scoring.
The use of an external scorer is fully optional. When an external scorer is not specified, Mozilla Voice STT still uses a beam search decoding algorithm, but without any outside scoring.
Currently, the DeepSpeech external scorer is implemented with `KenLM <https://kheafield.com/code/kenlm/>`_, plus some tooling to package the necessary files and metadata into a single ``.scorer`` package. The tooling lives in ``data/lm/``. The scripts included in ``data/lm/`` can be used and modified to build your own language model based on your particular use case or language. See :ref:`scorer-scripts` for more details on how to reproduce our scorer file as well as create your own.
Currently, the Mozilla Voice STT external scorer is implemented with `KenLM <https://kheafield.com/code/kenlm/>`_, plus some tooling to package the necessary files and metadata into a single ``.scorer`` package. The tooling lives in ``data/lm/``. The scripts included in ``data/lm/`` can be used and modified to build your own language model based on your particular use case or language. See :ref:`scorer-scripts` for more details on how to reproduce our scorer file as well as create your own.
The scripts are geared towards replicating the language model files we release as part of `DeepSpeech model releases <https://github.com/mozilla/DeepSpeech/releases/latest>`_, but modifying them to use different datasets or language model construction parameters should be simple.
The scripts are geared towards replicating the language model files we release as part of `Mozilla Voice STT model releases <https://github.com/mozilla/DeepSpeech/releases/latest>`_, but modifying them to use different datasets or language model construction parameters should be simple.
Decoding modes
^^^^^^^^^^^^^^
DeepSpeech currently supports two modes of operation with significant differences at both training and decoding time. Note that Bytes output mode is experimental and has not been tested for languages other than Chinese Mandarin.
Mozilla Voice STT currently supports two modes of operation with significant differences at both training and decoding time. Note that Bytes output mode is experimental and has not been tested for languages other than Chinese Mandarin.
Default mode (alphabet based)


@ -2,17 +2,17 @@
==============
DeepSpeech Class
----------------
MozillaVoiceSttModel Class
--------------------------
.. doxygenclass:: DeepSpeechClient::DeepSpeech
.. doxygenclass:: MozillaVoiceSttClient::MozillaVoiceSttModel
:project: deepspeech-dotnet
:members:
DeepSpeechStream Class
----------------------
MozillaVoiceSttStream Class
---------------------------
.. doxygenclass:: DeepSpeechClient::Models::DeepSpeechStream
.. doxygenclass:: MozillaVoiceSttClient::Models::MozillaVoiceSttStream
:project: deepspeech-dotnet
:members:
@ -21,33 +21,33 @@ ErrorCodes
See also the main definition including descriptions for each error in :ref:`error-codes`.
.. doxygenenum:: DeepSpeechClient::Enums::ErrorCodes
.. doxygenenum:: MozillaVoiceSttClient::Enums::ErrorCodes
:project: deepspeech-dotnet
Metadata
--------
.. doxygenclass:: DeepSpeechClient::Models::Metadata
.. doxygenclass:: MozillaVoiceSttClient::Models::Metadata
:project: deepspeech-dotnet
:members: Transcripts
CandidateTranscript
-------------------
.. doxygenclass:: DeepSpeechClient::Models::CandidateTranscript
.. doxygenclass:: MozillaVoiceSttClient::Models::CandidateTranscript
:project: deepspeech-dotnet
:members: Tokens, Confidence
TokenMetadata
-------------
.. doxygenclass:: DeepSpeechClient::Models::TokenMetadata
.. doxygenclass:: MozillaVoiceSttClient::Models::TokenMetadata
:project: deepspeech-dotnet
:members: Text, Timestep, StartTime
DeepSpeech Interface
--------------------
IMozillaVoiceSttModel Interface
-------------------------------
.. doxygeninterface:: DeepSpeechClient::Interfaces::IDeepSpeech
.. doxygeninterface:: MozillaVoiceSttClient::Interfaces::IMozillaVoiceSttModel
:project: deepspeech-dotnet
:members:


@ -1,12 +1,12 @@
.NET API Usage example
======================
Examples are from `native_client/dotnet/DeepSpeechConsole/Program.cs`.
Examples are from `native_client/dotnet/MozillaVoiceSttConsole/Program.cs`.
Creating a model instance and loading model
-------------------------------------------
.. literalinclude:: ../native_client/dotnet/DeepSpeechConsole/Program.cs
.. literalinclude:: ../native_client/dotnet/MozillaVoiceSttConsole/Program.cs
:language: csharp
:linenos:
:lineno-match:
@ -16,7 +16,7 @@ Creating a model instance and loading model
Performing inference
--------------------
.. literalinclude:: ../native_client/dotnet/DeepSpeechConsole/Program.cs
.. literalinclude:: ../native_client/dotnet/MozillaVoiceSttConsole/Program.cs
:language: csharp
:linenos:
:lineno-match:
@ -26,4 +26,4 @@ Performing inference
Full source code
----------------
See :download:`Full source code<../native_client/dotnet/DeepSpeechConsole/Program.cs>`.
See :download:`Full source code<../native_client/dotnet/MozillaVoiceSttConsole/Program.cs>`.


@ -5,7 +5,7 @@ Error codes
Below is the definition for all error codes used in the API, their numerical values, and a human-readable description.
.. literalinclude:: ../native_client/deepspeech.h
.. literalinclude:: ../native_client/mozilla_voice_stt.h
:language: c
:start-after: sphinx-doc: error_code_listing_start
:end-before: sphinx-doc: error_code_listing_end


@ -1,29 +1,29 @@
Java
====
DeepSpeechModel
---------------
MozillaVoiceSttModel
--------------------
.. doxygenclass:: org::mozilla::deepspeech::libdeepspeech::DeepSpeechModel
.. doxygenclass:: org::mozilla::voice::stt::MozillaVoiceSttModel
:project: deepspeech-java
:members:
Metadata
--------
.. doxygenclass:: org::mozilla::deepspeech::libdeepspeech::Metadata
.. doxygenclass:: org::mozilla::voice::stt::Metadata
:project: deepspeech-java
:members: getNumTranscripts, getTranscript
CandidateTranscript
-------------------
.. doxygenclass:: org::mozilla::deepspeech::libdeepspeech::CandidateTranscript
.. doxygenclass:: org::mozilla::voice::stt::CandidateTranscript
:project: deepspeech-java
:members: getNumTokens, getConfidence, getToken
TokenMetadata
-------------
.. doxygenclass:: org::mozilla::deepspeech::libdeepspeech::TokenMetadata
.. doxygenclass:: org::mozilla::voice::stt::TokenMetadata
:project: deepspeech-java
:members: getText, getTimestep, getStartTime


@ -1,12 +1,12 @@
Java API Usage example
======================
Examples are from `native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java`.
Examples are from `native_client/java/app/src/main/java/org/mozilla/voice/sttapp/MozillaVoiceSttActivity.java`.
Creating a model instance and loading model
-------------------------------------------
.. literalinclude:: ../native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java
.. literalinclude:: ../native_client/java/app/src/main/java/org/mozilla/voice/sttapp/MozillaVoiceSttActivity.java
:language: java
:linenos:
:lineno-match:
@ -16,7 +16,7 @@ Creating a model instance and loading model
Performing inference
--------------------
.. literalinclude:: ../native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java
.. literalinclude:: ../native_client/java/app/src/main/java/org/mozilla/voice/sttapp/MozillaVoiceSttActivity.java
:language: java
:linenos:
:lineno-match:
@ -26,4 +26,4 @@ Performing inference
Full source code
----------------
See :download:`Full source code<../native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java>`.
See :download:`Full source code<../native_client/java/app/src/main/java/org/mozilla/voice/sttapp/MozillaVoiceSttActivity.java>`.


@ -4,7 +4,7 @@
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
SPHINXPROJ = DeepSpeech
SPHINXPROJ = Mozilla Voice STT
SOURCEDIR = .
BUILDDIR = .build


@ -1,8 +1,8 @@
Parallel Optimization
=====================
This is how we implement optimization of the DeepSpeech model across GPUs on a
single host. Parallel optimization can take on various forms. For example
This is how we implement optimization of the Mozilla Voice STT model across GPUs
on a single host. Parallel optimization can take on various forms. For example,
one can use asynchronous updates of the model, synchronous updates of the model,
or some combination of the two.


@ -9,61 +9,61 @@ Linux / AMD64 without GPU
^^^^^^^^^^^^^^^^^^^^^^^^^
* x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference)
* Ubuntu 14.04+ (glibc >= 2.19, libstdc++6 >= 4.8)
* Full TensorFlow runtime (``deepspeech`` packages)
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
* Full TensorFlow runtime (``mozilla_voice_stt`` packages)
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
Linux / AMD64 with GPU
^^^^^^^^^^^^^^^^^^^^^^
* x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference)
* Ubuntu 14.04+ (glibc >= 2.19, libstdc++6 >= 4.8)
* CUDA 10.0 (and capable GPU)
* Full TensorFlow runtime (``deepspeech`` packages)
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
* Full TensorFlow runtime (``mozilla_voice_stt`` packages)
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
Linux / ARMv7
^^^^^^^^^^^^^
* Cortex-A53 compatible ARMv7 SoC with Neon support
* Raspbian Buster-compatible distribution
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
Linux / Aarch64
^^^^^^^^^^^^^^^
* Cortex-A72 compatible Aarch64 SoC
* ARMbian Buster-compatible distribution
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
Android / ARMv7
^^^^^^^^^^^^^^^
* ARMv7 SoC with Neon support
* Android 7.0-10.0
* NDK API level >= 21
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
Android / Aarch64
^^^^^^^^^^^^^^^^^
* Aarch64 SoC
* Android 7.0-10.0
* NDK API level >= 21
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
macOS / AMD64
^^^^^^^^^^^^^
* x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference)
* macOS >= 10.10
* Full TensorFlow runtime (``deepspeech`` packages)
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
* Full TensorFlow runtime (``mozilla_voice_stt`` packages)
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
Windows / AMD64 without GPU
^^^^^^^^^^^^^^^^^^^^^^^^^^^
* x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference)
* Windows Server >= 2012 R2 ; Windows >= 8.1
* Full TensorFlow runtime (``deepspeech`` packages)
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
* Full TensorFlow runtime (``mozilla_voice_stt`` packages)
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
Windows / AMD64 with GPU
^^^^^^^^^^^^^^^^^^^^^^^^
* x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference)
* Windows Server >= 2012 R2 ; Windows >= 8.1
* CUDA 10.0 (and capable GPU)
* Full TensorFlow runtime (``deepspeech`` packages)
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
* Full TensorFlow runtime (``mozilla_voice_stt`` packages)
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)


@ -3,7 +3,7 @@
External scorer scripts
=======================
DeepSpeech pre-trained models include an external scorer. This document explains how to reproduce our external scorer, as well as adapt the scripts to create your own.
Mozilla Voice STT pre-trained models include an external scorer. This document explains how to reproduce our external scorer, as well as adapt the scripts to create your own.
The scorer is composed of two sub-components, a KenLM language model and a trie data structure containing all words in the vocabulary. In order to create the scorer package, we must first create a KenLM language model (using ``data/lm/generate_lm.py``), and then use ``generate_scorer_package`` to create the final package file including the trie data structure.
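A rough sketch of that two-step flow is shown below; the file names are illustrative and the flags are only a subset of what the scripts accept, so treat this as an outline rather than a ready-to-run command:

.. code-block:: bash

# 1. Build a KenLM language model from a text corpus
python3 data/lm/generate_lm.py --input_txt corpus.txt.gz --output_dir lm/ \
    --top_k 500000 --kenlm_bins /path/to/kenlm/build/bin/ --arpa_order 5
# 2. Package the binary LM, its vocabulary, and default alpha/beta values into a single .scorer file
./generate_scorer_package --alphabet alphabet.txt --lm lm/lm.binary --vocab lm/vocab-500000.txt \
    --package kenlm.scorer --default_alpha 0.93 --default_beta 1.18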
@ -59,6 +59,6 @@ Building your own scorer can be useful if you're using models in a narrow usage
The LibriSpeech LM training text used by our scorer is around 4GB uncompressed, which should give an idea of the size of a corpus needed for a reasonable language model for general speech recognition. For more constrained use cases with smaller vocabularies, you don't need as much data, but you should still try to gather as much as you can.
With a text corpus in hand, you can then re-use ``generate_lm.py`` and ``generate_scorer_package`` to create your own scorer that is compatible with DeepSpeech clients and language bindings. Before building the language model, you must first familiarize yourself with the `KenLM toolkit <https://kheafield.com/code/kenlm/>`_. Most of the options exposed by the ``generate_lm.py`` script are simply forwarded to KenLM options of the same name, so you must read the KenLM documentation in order to fully understand their behavior.
With a text corpus in hand, you can then re-use ``generate_lm.py`` and ``generate_scorer_package`` to create your own scorer that is compatible with Mozilla Voice STT clients and language bindings. Before building the language model, you must first familiarize yourself with the `KenLM toolkit <https://kheafield.com/code/kenlm/>`_. Most of the options exposed by the ``generate_lm.py`` script are simply forwarded to KenLM options of the same name, so you must read the KenLM documentation in order to fully understand their behavior.
After using ``generate_lm.py`` to create a KenLM language model binary file, you can use ``generate_scorer_package`` to create a scorer package as described in the previous section. Note that we have a :github:`lm_optimizer.py script <lm_optimizer.py>` which can be used to find good default values for alpha and beta. To use it, you must first generate a package with any value set for default alpha and beta flags. For this step, it doesn't matter what values you use, as they'll be overridden by ``lm_optimizer.py`` later. Then, use ``lm_optimizer.py`` with this scorer file to find good alpha and beta values. Finally, use ``generate_scorer_package`` again, this time with the new values.


@ -12,7 +12,7 @@ Prerequisites for training a model
Getting the training code
^^^^^^^^^^^^^^^^^^^^^^^^^
Clone the DeepSpeech repository:
Clone the Mozilla Voice STT repository:
.. code-block:: bash
@ -21,25 +21,25 @@ Clone the DeepSpeech repository:
Creating a virtual environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In creating a virtual environment you will create a directory containing a ``python3`` binary and everything needed to run deepspeech. You can use whatever directory you want. For the purpose of the documentation, we will rely on ``$HOME/tmp/deepspeech-train-venv``. You can create it using this command:
Creating a virtual environment produces a directory containing a ``python3`` binary and everything needed to run Mozilla Voice STT. You can use whatever directory you want. For the purpose of the documentation, we will rely on ``$HOME/tmp/stt-train-venv``. You can create it using this command:
.. code-block::
$ python3 -m venv $HOME/tmp/deepspeech-train-venv/
$ python3 -m venv $HOME/tmp/stt-train-venv/
Once this command completes successfully, the environment will be ready to be activated.
Activating the environment
^^^^^^^^^^^^^^^^^^^^^^^^^^
Each time you need to work with DeepSpeech, you have to *activate* this virtual environment. This is done with this simple command:
Each time you need to work with Mozilla Voice STT, you have to *activate* this virtual environment. This is done with this simple command:
.. code-block::
$ source $HOME/tmp/deepspeech-train-venv/bin/activate
$ source $HOME/tmp/stt-train-venv/bin/activate
Installing DeepSpeech Training Code and its dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Installing Mozilla Voice STT Training Code and its dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Install the required dependencies using ``pip3``\ :
@ -88,7 +88,7 @@ This should ensure that you'll re-use the upstream Python 3 TensorFlow GPU-enabl
make Dockerfile.train
If you want to specify a different DeepSpeech repository / branch, you can pass ``DEEPSPEECH_REPO`` or ``DEEPSPEECH_SHA`` parameters:
If you want to specify a different Mozilla Voice STT repository / branch, you can pass ``DEEPSPEECH_REPO`` or ``DEEPSPEECH_SHA`` parameters:
.. code-block:: bash
@ -105,7 +105,7 @@ After extraction of such a data set, you'll find the following contents:
* the ``*.tsv`` files output by CorporaCreator for the downloaded language
* the mp3 audio files they reference in a ``clips`` sub-directory.
For bringing this data into a form that DeepSpeech understands, you have to run the CommonVoice v2.0 importer (\ ``bin/import_cv2.py``\ ):
To bring this data into a form that Mozilla Voice STT understands, you have to run the CommonVoice v2.0 importer (\ ``bin/import_cv2.py``\ ):
.. code-block:: bash
@ -147,7 +147,7 @@ For executing pre-configured training scenarios, there is a collection of conven
**If you experience GPU OOM errors while training, try reducing the batch size with the ``--train_batch_size``\ , ``--dev_batch_size`` and ``--test_batch_size`` parameters.**
As a simple first example you can open a terminal, change to the directory of the DeepSpeech checkout, activate the virtualenv created above, and run:
As a simple first example you can open a terminal, change to the directory of the Mozilla Voice STT checkout, activate the virtualenv created above, and run:
.. code-block:: bash
@ -157,7 +157,7 @@ This script will train on a small sample dataset composed of just a single audio
Feel free to also pass additional (or overriding) ``DeepSpeech.py`` parameters to these scripts. Then, just run the script to train the modified network.
Each dataset has a corresponding importer script in ``bin/`` that can be used to download (if it's freely available) and preprocess the dataset. See ``bin/import_librivox.py`` for an example of how to import and preprocess a large dataset for training with DeepSpeech.
Each dataset has a corresponding importer script in ``bin/`` that can be used to download (if it's freely available) and preprocess the dataset. See ``bin/import_librivox.py`` for an example of how to import and preprocess a large dataset for training with Mozilla Voice STT.
Some importers might require additional code to properly handle your locale-specific requirements. Such handling is dealt with via the ``--validate_label_locale`` flag, which allows you to source an out-of-tree Python script that defines a ``validate_label`` function. Please refer to ``util/importers.py`` for an implementation example of that function.
If you don't provide this argument, the default ``validate_label`` function will be used. This one is only intended for the English language, so you might have consistency issues in your data for other languages.
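For reference, a minimal out-of-tree script of that shape might look like the sketch below. The exact cleaning rules are assumptions to adapt to your locale; the contract is that ``validate_label`` returns the cleaned label, or ``None`` to discard the sample:

.. code-block:: python

import re

def validate_label(label):
    # Discard samples containing digits, since we cannot tell how they were pronounced
    if re.search(r"[0-9]", label):
        return None
    # Strip punctuation we do not model, then normalize whitespace and case
    label = re.sub(r'[.?!,;:"]', '', label)
    label = re.sub(r'\s+', ' ', label).strip().lower()
    return label if label else None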
@ -184,7 +184,7 @@ Mixed precision training makes use of both FP32 and FP16 precisions where approp
python3 DeepSpeech.py --train_files ./train.csv --dev_files ./dev.csv --test_files ./test.csv --automatic_mixed_precision
```
On a Volta generation V100 GPU, automatic mixed precision speeds up DeepSpeech training and evaluation by ~30%-40%.
On a Volta generation V100 GPU, automatic mixed precision speeds up Mozilla Voice STT training and evaluation by ~30%-40%.
Checkpointing
^^^^^^^^^^^^^
@ -226,9 +226,9 @@ Upon sucessfull run, it should report about conversion of a non-zero number of n
Continuing training from a release model
----------------------------------------
There are currently two supported approaches to make use of a pre-trained DeepSpeech model: fine-tuning or transfer-learning. Choosing which one to use is a simple decision, and it depends on your target dataset. Does your data use the same alphabet as the release model? If "Yes": fine-tune. If "No" use transfer-learning.
There are currently two supported approaches to make use of a pre-trained Mozilla Voice STT model: fine-tuning or transfer-learning. Choosing which one to use is a simple decision, and it depends on your target dataset. Does your data use the same alphabet as the release model? If "Yes": fine-tune. If "No" use transfer-learning.
If your own data uses the *exact* same alphabet as the English release model (i.e. `a-z` plus `'`) then the release model's output layer will match your data, and you can just fine-tune the existing parameters. However, if you want to use a new alphabet (e.g. Cyrillic `а`, `б`, `д`), the output layer of a release DeepSpeech model will *not* match your data. In this case, you should use transfer-learning (i.e. remove the trained model's output layer, and reinitialize a new output layer that matches your target character set).
If your own data uses the *exact* same alphabet as the English release model (i.e. `a-z` plus `'`) then the release model's output layer will match your data, and you can just fine-tune the existing parameters. However, if you want to use a new alphabet (e.g. Cyrillic `а`, `б`, `д`), the output layer of a release Mozilla Voice STT model will *not* match your data. In this case, you should use transfer-learning (i.e. remove the trained model's output layer, and reinitialize a new output layer that matches your target character set).
N.B. - If you have access to a pre-trained model which uses UTF-8 bytes at the output layer you can always fine-tune, because any alphabet should be encodable as UTF-8.
@ -260,11 +260,11 @@ If you try to load a release model without following these steps, you'll get an
Transfer-Learning (new alphabet)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to continue training an alphabet-based DeepSpeech model (i.e. not a UTF-8 model) on a new language, or if you just want to add new characters to your custom alphabet, you will probably want to use transfer-learning instead of fine-tuning. If you're starting with a pre-trained UTF-8 model -- even if your data comes from a different language or uses a different alphabet -- the model will be able to predict your new transcripts, and you should use fine-tuning instead.
If you want to continue training an alphabet-based Mozilla Voice STT model (i.e. not a UTF-8 model) on a new language, or if you just want to add new characters to your custom alphabet, you will probably want to use transfer-learning instead of fine-tuning. If you're starting with a pre-trained UTF-8 model -- even if your data comes from a different language or uses a different alphabet -- the model will be able to predict your new transcripts, and you should use fine-tuning instead.
In a nutshell, DeepSpeech's transfer-learning allows you to remove certain layers from a pre-trained model, initialize new layers for your target data, stitch together the old and new layers, and update all layers via gradient descent. You will remove the pre-trained output layer (and optionally more layers) and reinitialize parameters to fit your target alphabet. The simplest case of transfer-learning is when you remove just the output layer.
In a nutshell, Mozilla Voice STT's transfer-learning allows you to remove certain layers from a pre-trained model, initialize new layers for your target data, stitch together the old and new layers, and update all layers via gradient descent. You will remove the pre-trained output layer (and optionally more layers) and reinitialize parameters to fit your target alphabet. The simplest case of transfer-learning is when you remove just the output layer.
In DeepSpeech's implementation of transfer-learning, all removed layers will be contiguous, starting from the output layer. The key flag you will want to experiment with is ``--drop_source_layers``. This flag accepts an integer from ``1`` to ``5`` and allows you to specify how many layers you want to remove from the pre-trained model. For example, if you supplied ``--drop_source_layers 3``, you will drop the last three layers of the pre-trained model: the output layer, penultimate layer, and LSTM layer. All dropped layers will be reinitialized, and (crucially) the output layer will be defined to match your supplied target alphabet.
In Mozilla Voice STT's implementation of transfer-learning, all removed layers will be contiguous, starting from the output layer. The key flag you will want to experiment with is ``--drop_source_layers``. This flag accepts an integer from ``1`` to ``5`` and allows you to specify how many layers you want to remove from the pre-trained model. For example, if you supplied ``--drop_source_layers 3``, you will drop the last three layers of the pre-trained model: the output layer, penultimate layer, and LSTM layer. All dropped layers will be reinitialized, and (crucially) the output layer will be defined to match your supplied target alphabet.
You need to specify the location of the pre-trained model with ``--load_checkpoint_dir`` and define where your new model checkpoints will be saved with ``--save_checkpoint_dir``. You need to specify how many layers to remove (aka "drop") from the pre-trained model: ``--drop_source_layers``. You also need to supply your new alphabet file using the standard ``--alphabet_config_path`` (remember, using a new alphabet is the whole reason you want to use transfer-learning).
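A sketch of such a run, with hypothetical file paths, might look like this (the checkpoint and alphabet flags are the ones described above; ``--train_files``, ``--dev_files`` and ``--test_files`` are the usual training parameters):

.. code-block:: bash

python3 DeepSpeech.py \
    --drop_source_layers 1 \
    --alphabet_config_path my-new-alphabet.txt \
    --load_checkpoint_dir path/to/release/checkpoint/ \
    --save_checkpoint_dir path/to/new/checkpoint/ \
    --train_files my-train.csv --dev_files my-dev.csv --test_files my-test.csv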
@ -282,8 +282,7 @@ You need to specify the location of the pre-trained model with ``--load_checkpoi
UTF-8 mode
^^^^^^^^^^
DeepSpeech includes a UTF-8 operating mode which can be useful to model languages with very large alphabets, such as Chinese Mandarin. For details on how it works and how to use it, see :ref:`decoder-docs`.
Mozilla Voice STT includes a UTF-8 operating mode which can be useful to model languages with very large alphabets, such as Chinese Mandarin. For details on how it works and how to use it, see :ref:`decoder-docs`.
.. _training-data-augmentation:


@ -3,7 +3,7 @@
Using a Pre-trained Model
=========================
Inference using a DeepSpeech pre-trained model can be done with a client/language binding package. We have four clients/language bindings in this repository, listed below, and also a few community-maintained clients/language bindings in other repositories, listed `further down in this README <#third-party-bindings>`_.
Inference using a Mozilla Voice STT pre-trained model can be done with a client/language binding package. We have four clients/language bindings in this repository, listed below, and also a few community-maintained clients/language bindings in other repositories, listed `further down in this README <#third-party-bindings>`_.
* :ref:`The C API <c-usage>`.
* :ref:`The Python package/language binding <py-usage>`
@ -13,7 +13,7 @@ Inference using a DeepSpeech pre-trained model can be done with a client/languag
.. _runtime-deps:
Running ``deepspeech`` might require some runtime dependencies to be already installed on your system (see below):
Running ``mozilla_voice_stt`` might require some runtime dependencies to be already installed on your system (see below):
* ``sox`` - The Python and Node.JS clients use SoX to resample files to 16kHz.
* ``libgomp1`` - libsox (statically linked into the clients) depends on OpenMP. Some people have had to install this manually.
@ -28,29 +28,29 @@ Please refer to your system's documentation on how to install these dependencies
CUDA dependency
^^^^^^^^^^^^^^^
The GPU capable builds (Python, NodeJS, C++, etc) depend on CUDA 10.1 and CuDNN v7.6.
The CUDA capable builds (Python, NodeJS, C++, etc) depend on CUDA 10.1 and CuDNN v7.6.
Getting the pre-trained model
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to use the pre-trained English model for performing speech-to-text, you can download it (along with other important inference material) from the DeepSpeech `releases page <https://github.com/mozilla/DeepSpeech/releases>`_. Alternatively, you can run the following command to download the model files in your current directory:
If you want to use the pre-trained English model for performing speech-to-text, you can download it (along with other important inference material) from the Mozilla Voice STT `releases page <https://github.com/mozilla/DeepSpeech/releases>`_. Alternatively, you can run the following command to download the model files in your current directory:
.. code-block:: bash
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.7.4/deepspeech-0.7.4-models.pbmm
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.7.4/deepspeech-0.7.4-models.scorer
There are several pre-trained model files available in official releases. Files ending in ``.pbmm`` are compatible with clients and language bindings built against the standard TensorFlow runtime. Usually these packages are simply called ``deepspeech``. These files are also compatible with CUDA enabled clients and language bindings. These packages are usually called ``deepspeech-gpu``. Files ending in ``.tflite`` are compatible with clients and language bindings built against the `TensorFlow Lite runtime <https://www.tensorflow.org/lite/>`_. These models are optimized for size and performance in low power devices. On desktop platforms, the compatible packages are called ``deepspeech-tflite``. On Android and Raspberry Pi, we only publish TensorFlow Lite enabled packages, and they are simply called ``deepspeech``. You can see a full list of supported platforms and which TensorFlow runtime is supported at :ref:`supported-platforms-inference`.
There are several pre-trained model files available in official releases. Files ending in ``.pbmm`` are compatible with clients and language bindings built against the standard TensorFlow runtime. Usually these packages are simply called ``mozilla_voice_stt``. These files are also compatible with CUDA enabled clients and language bindings. These packages are usually called ``mozilla_voice_stt_cuda``. Files ending in ``.tflite`` are compatible with clients and language bindings built against the `TensorFlow Lite runtime <https://www.tensorflow.org/lite/>`_. These models are optimized for size and performance in low power devices. On desktop platforms, the compatible packages are called ``mozilla_voice_stt_tflite``. On Android and Raspberry Pi, we only publish TensorFlow Lite enabled packages, and they are simply called ``mozilla_voice_stt``. You can see a full list of supported platforms and which TensorFlow runtime is supported at :ref:`supported-platforms-inference`.
+--------------------+---------------------+---------------------+
| Package/Model type | .pbmm | .tflite |
+====================+=====================+=====================+
| deepspeech | Depends on platform | Depends on platform |
+--------------------+---------------------+---------------------+
| deepspeech-gpu | ✅ | ❌ |
+--------------------+---------------------+---------------------+
| deepspeech-tflite | ❌ | ✅ |
+--------------------+---------------------+---------------------+
+--------------------------+---------------------+---------------------+
| Package/Model type | .pbmm | .tflite |
+==========================+=====================+=====================+
| mozilla_voice_stt | Depends on platform | Depends on platform |
+--------------------------+---------------------+---------------------+
| mozilla_voice_stt_cuda | ✅ | ❌ |
+--------------------------+---------------------+---------------------+
| mozilla_voice_stt_tflite | ❌ | ✅ |
+--------------------------+---------------------+---------------------+
Finally, the pre-trained model files also include files ending in ``.scorer``. These are external scorers (language models) that are used at inference time in conjunction with an acoustic model (``.pbmm`` or ``.tflite`` file) to produce transcriptions. We also provide further documentation on :ref:`the decoding process <decoder-docs>` and :ref:`how scorers are generated <scorer-scripts>`.
@ -61,82 +61,82 @@ The release notes include detailed information on how the released models were t
The process for training an acoustic model is described in :ref:`training-docs`. In particular, fine tuning a release model using your own data can be a good way to leverage relatively smaller amounts of data that would not be sufficient for training a new model from scratch. See the :ref:`fine tuning and transfer learning sections <training-fine-tuning>` for more information. :ref:`Data augmentation <training-data-augmentation>` can also be a good way to increase the value of smaller training sets.
Creating your own external scorer from text data is another way that you can adapt the model to your specific needs. The process and tools used to generate an external scorer package are described in :ref:`scorer-scripts` and an overview of how the external scorer is used by DeepSpeech to perform inference is available in :ref:`decoder-docs`. Generating a smaller scorer from a single-purpose text dataset is a quick process and can bring significant accuracy improvements, especially for more constrained, limited vocabulary applications.
Creating your own external scorer from text data is another way that you can adapt the model to your specific needs. The process and tools used to generate an external scorer package are described in :ref:`scorer-scripts` and an overview of how the external scorer is used by Mozilla Voice STT to perform inference is available in :ref:`decoder-docs`. Generating a smaller scorer from a single-purpose text dataset is a quick process and can bring significant accuracy improvements, especially for more constrained, limited vocabulary applications.
Model compatibility
^^^^^^^^^^^^^^^^^^^
DeepSpeech models are versioned to keep you from trying to use an incompatible graph with a newer client after a breaking change was made to the code. If you get an error saying your model file version is too old for the client, you should either upgrade to a newer model release, re-export your model from the checkpoint using a newer version of the code, or downgrade your client if you need to use the old model and can't re-export it.
Mozilla Voice STT models are versioned to keep you from trying to use an incompatible graph with a newer client after a breaking change was made to the code. If you get an error saying your model file version is too old for the client, you should either upgrade to a newer model release, re-export your model from the checkpoint using a newer version of the code, or downgrade your client if you need to use the old model and can't re-export it.
.. _py-usage:
Using the Python package
^^^^^^^^^^^^^^^^^^^^^^^^
Pre-built binaries which can be used for performing inference with a trained model can be installed with ``pip3``. You can then use the ``deepspeech`` binary to do speech-to-text on an audio file:
Pre-built binaries which can be used for performing inference with a trained model can be installed with ``pip3``. You can then use the ``mozilla_voice_stt`` binary to do speech-to-text on an audio file:
For the Python bindings, it is highly recommended that you perform the installation within a Python 3.5 or later virtual environment. You can find more information about those in `this documentation <http://docs.python-guide.org/en/latest/dev/virtualenvs/>`_.
We will continue under the assumption that you already have your system properly set up to create new virtual environments.
Create a DeepSpeech virtual environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Create a Mozilla Voice STT virtual environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In creating a virtual environment you will create a directory containing a ``python3`` binary and everything needed to run deepspeech. You can use whatever directory you want. For the purpose of the documentation, we will rely on ``$HOME/tmp/deepspeech-venv``. You can create it using this command:
In creating a virtual environment you will create a directory containing a ``python3`` binary and everything needed to run Mozilla Voice STT. You can use whatever directory you want. For the purpose of the documentation, we will rely on ``$HOME/tmp/stt-venv``. You can create it using this command:
.. code-block::
$ virtualenv -p python3 $HOME/tmp/deepspeech-venv/
$ virtualenv -p python3 $HOME/tmp/stt-venv/
Once this command completes successfully, the environment will be ready to be activated.
Activating the environment
~~~~~~~~~~~~~~~~~~~~~~~~~~
Each time you need to work with DeepSpeech, you have to *activate* this virtual environment. This is done with this simple command:
Each time you need to work with Mozilla Voice STT, you have to *activate* this virtual environment. This is done with this simple command:
.. code-block::
$ source $HOME/tmp/deepspeech-venv/bin/activate
$ source $HOME/tmp/stt-venv/bin/activate
Installing DeepSpeech Python bindings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Installing Mozilla Voice STT Python bindings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once your environment has been set-up and loaded, you can use ``pip3`` to manage packages locally. On a fresh setup of the ``virtualenv``\ , you will have to install the DeepSpeech wheel. You can check if ``deepspeech`` is already installed with ``pip3 list``.
Once your environment has been set up and loaded, you can use ``pip3`` to manage packages locally. On a fresh setup of the ``virtualenv``\ , you will have to install the Mozilla Voice STT wheel. You can check if ``mozilla_voice_stt`` is already installed with ``pip3 list``.
To perform the installation, just use ``pip3`` as such:
.. code-block::
$ pip3 install deepspeech
$ pip3 install mozilla_voice_stt
If ``deepspeech`` is already installed, you can update it as such:
If ``mozilla_voice_stt`` is already installed, you can update it as such:
.. code-block::
$ pip3 install --upgrade deepspeech
$ pip3 install --upgrade mozilla_voice_stt
Alternatively, if you have a supported NVIDIA GPU on Linux, you can install the GPU specific package as follows:
Alternatively, if you have a supported NVIDIA GPU on Linux, you can install the CUDA specific package as follows:
.. code-block::
$ pip3 install deepspeech-gpu
$ pip3 install mozilla_voice_stt_cuda
See the `release notes <https://github.com/mozilla/DeepSpeech/releases>`_ to find which GPUs are supported. Please ensure you have the required `CUDA dependency <#cuda-dependency>`_.
You can update ``deepspeech-gpu`` as follows:
You can update ``mozilla_voice_stt_cuda`` as follows:
.. code-block::
$ pip3 install --upgrade deepspeech-gpu
$ pip3 install --upgrade mozilla_voice_stt_cuda
In both cases, ``pip3`` should take care of installing all the required dependencies. After installation has finished, you should be able to call ``deepspeech`` from the command-line.
In both cases, ``pip3`` should take care of installing all the required dependencies. After installation has finished, you should be able to call ``mozilla_voice_stt`` from the command-line.
Note: the following command assumes you `downloaded the pre-trained model <#getting-the-pre-trained-model>`_.
.. code-block:: bash
deepspeech --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio my_audio_file.wav
mozilla_voice_stt --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio my_audio_file.wav
The ``--scorer`` argument is optional, and represents an external language model to be used when transcribing the audio.
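The same is true of the Python API: the scorer can be enabled, tuned, or disabled at runtime. A short sketch, again assuming the renamed package keeps the ``deepspeech`` ``Model`` API, with hypothetical alpha/beta values:

.. code-block:: python

   from mozilla_voice_stt import Model

   model = Model('deepspeech-0.7.4-models.pbmm')

   # Purely acoustic decoding, no language model:
   # result = model.stt(audio)

   # Decoding with an external scorer, optionally re-weighted:
   model.enableExternalScorer('deepspeech-0.7.4-models.scorer')
   model.setScorerAlphaBeta(0.93, 1.18)

   # Drop the scorer again if it is no longer wanted
   model.disableExternalScorer()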
@ -151,7 +151,7 @@ You can download the JS bindings using ``npm``\ :
.. code-block:: bash
npm install deepspeech
npm install mozilla_voice_stt
Please note that as of now, we support:
- Node.JS versions 4 to 13.
@ -159,11 +159,11 @@ Please note that as of now, we support:
TypeScript support is also provided.
Alternatively, if you're using Linux and have a supported NVIDIA GPU, you can install the GPU specific package as follows:
Alternatively, if you're using Linux and have a supported NVIDIA GPU, you can install the CUDA specific package as follows:
.. code-block:: bash
npm install deepspeech-gpu
npm install mozilla_voice_stt_cuda
See the `release notes <https://github.com/mozilla/DeepSpeech/releases>`_ to find which GPUs are supported. Please ensure you have the required `CUDA dependency <#cuda-dependency>`_.
@ -174,7 +174,7 @@ See the :ref:`TypeScript client <js-api-example>` for an example of how to use t
Using the command-line client
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To download the pre-built binaries for the ``deepspeech`` command-line (compiled C++) client, use ``util/taskcluster.py``\ :
To download the pre-built binaries for the ``mozilla_voice_stt`` command-line (compiled C++) client, use ``util/taskcluster.py``\ :
.. code-block:: bash
@ -192,7 +192,7 @@ also, if you need some binaries different than current master, like ``v0.2.0-alp
python3 util/taskcluster.py --branch "v0.2.0-alpha.6" --target "."
The script ``taskcluster.py`` will download ``native_client.tar.xz`` (which includes the ``deepspeech`` binary and associated libraries) and extract it into the current folder. Also, ``taskcluster.py`` will download binaries for Linux/x86_64 by default, but you can override that behavior with the ``--arch`` parameter. See the help info with ``python util/taskcluster.py -h`` for more details. Specific branches of DeepSpeech or TensorFlow can be specified as well.
The script ``taskcluster.py`` will download ``native_client.tar.xz`` (which includes the ``mozilla_voice_stt`` binary and associated libraries) and extract it into the current folder. Also, ``taskcluster.py`` will download binaries for Linux/x86_64 by default, but you can override that behavior with the ``--arch`` parameter. See the help info with ``python util/taskcluster.py -h`` for more details. Specific branches of Mozilla Voice STT or TensorFlow can be specified as well.
Alternatively you may manually download the ``native_client.tar.xz`` from the `releases page <https://github.com/mozilla/DeepSpeech/releases>`_.
@ -200,9 +200,9 @@ Note: the following command assumes you `downloaded the pre-trained model <#gett
.. code-block:: bash
./deepspeech --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio_input.wav
./mozilla_voice_stt --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio_input.wav
See the help output with ``./deepspeech -h`` for more details.
See the help output with ``./mozilla_voice_stt -h`` for more details.
Installing bindings from source
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -212,14 +212,14 @@ If pre-built binaries aren't available for your system, you'll need to install t
Dockerfile for building from source
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We provide ``Dockerfile.build`` to automatically build ``libdeepspeech.so``, the C++ native client, Python bindings, and KenLM.
We provide ``Dockerfile.build`` to automatically build ``libmozilla_voice_stt.so``, the C++ native client, Python bindings, and KenLM.
You need to generate the Dockerfile from the template using:
.. code-block:: bash
make Dockerfile.build
If you want to specify a different DeepSpeech repository / branch, you can pass ``DEEPSPEECH_REPO`` or ``DEEPSPEECH_SHA`` parameters:
If you want to specify a different Mozilla Voice STT repository / branch, you can pass ``DEEPSPEECH_REPO`` or ``DEEPSPEECH_SHA`` parameters:
.. code-block:: bash

Просмотреть файл

@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
#
# DeepSpeech documentation build configuration file, created by
# Mozilla Voice STT documentation build configuration file, created by
# sphinx-quickstart on Thu Feb 2 21:20:39 2017.
#
# This file is execfile()d with the current directory set to its
@ -24,7 +24,7 @@ import sys
sys.path.insert(0, os.path.abspath('../'))
autodoc_mock_imports = ['deepspeech']
autodoc_mock_imports = ['mozilla_voice_stt']
# This is in fact only relevant on ReadTheDocs, but we want to run the same way
# on our CI as in RTD to avoid regressions on RTD that we would not catch on
@ -41,7 +41,7 @@ import semver
# -- Project information -----------------------------------------------------
project = u'DeepSpeech'
project = u'Mozilla Voice STT'
copyright = '2019-2020, Mozilla Corporation'
author = 'Mozilla Corporation'
@ -143,7 +143,7 @@ html_static_path = ['.static']
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'DeepSpeechdoc'
htmlhelp_basename = 'sttdoc'
# -- Options for LaTeX output ---------------------------------------------
@ -170,7 +170,7 @@ latex_elements = {
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'DeepSpeech.tex', u'DeepSpeech Documentation',
(master_doc, 'Mozilla_Voice_STT.tex', u'Mozilla Voice STT Documentation',
u'Mozilla Research', 'manual'),
]
@ -180,7 +180,7 @@ latex_documents = [
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'deepspeech', u'DeepSpeech Documentation',
(master_doc, 'mozilla_voice_stt', u'Mozilla Voice STT Documentation',
[author], 1)
]
@ -191,8 +191,8 @@ man_pages = [
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'DeepSpeech', u'DeepSpeech Documentation',
author, 'DeepSpeech', 'One line description of project.',
(master_doc, 'Mozilla Voice STT', u'Mozilla Voice STT Documentation',
author, 'Mozilla Voice STT', 'One line description of project.',
'Miscellaneous'),
]

Просмотреть файл

@ -790,7 +790,7 @@ WARN_LOGFILE =
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.
INPUT = native_client/deepspeech.h
INPUT = native_client/mozilla_voice_stt.h
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses

Просмотреть файл

@ -790,7 +790,7 @@ WARN_LOGFILE =
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.
INPUT = native_client/dotnet/DeepSpeechClient/ native_client/dotnet/DeepSpeechClient/Interfaces/ native_client/dotnet/DeepSpeechClient/Enums/ native_client/dotnet/DeepSpeechClient/Models/
INPUT = native_client/dotnet/MozillaVoiceSttClient/ native_client/dotnet/MozillaVoiceSttClient/Interfaces/ native_client/dotnet/MozillaVoiceSttClient/Enums/ native_client/dotnet/MozillaVoiceSttClient/Models/
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses

Просмотреть файл

@ -790,7 +790,7 @@ WARN_LOGFILE =
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.
INPUT = native_client/java/libdeepspeech/src/main/java/org/mozilla/deepspeech/libdeepspeech/ native_client/java/libdeepspeech/src/main/java/org/mozilla/deepspeech/libdeepspeech_doc/
INPUT = native_client/java/libmozillavoicestt/src/main/java/org/mozilla/voice/stt/ native_client/java/libmozillavoicestt/src/main/java/org/mozilla/voice/stt_doc/
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses

Просмотреть файл

@ -1,23 +1,23 @@
.. DeepSpeech documentation master file, created by
.. Mozilla Voice STT documentation master file, created by
sphinx-quickstart on Thu Feb 2 21:20:39 2017.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to DeepSpeech's documentation!
Welcome to Mozilla Voice STT's documentation!
==============================================
DeepSpeech is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on `Baidu's Deep Speech research paper <https://arxiv.org/abs/1412.5567>`_. Project DeepSpeech uses Google's `TensorFlow <https://www.tensorflow.org/>`_ to make the implementation easier.
Mozilla Voice STT is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on `Baidu's Deep Speech research paper <https://arxiv.org/abs/1412.5567>`_. Mozilla Voice STT uses Google's `TensorFlow <https://www.tensorflow.org/>`_ to make the implementation easier.
To install and use DeepSpeech all you have to do is:
To install and use Mozilla Voice STT all you have to do is:
.. code-block:: bash
# Create and activate a virtualenv
virtualenv -p python3 $HOME/tmp/deepspeech-venv/
source $HOME/tmp/deepspeech-venv/bin/activate
virtualenv -p python3 $HOME/tmp/stt-venv/
source $HOME/tmp/stt-venv/bin/activate
# Install DeepSpeech
pip3 install deepspeech
# Install Mozilla Voice STT
pip3 install mozilla_voice_stt
# Download pre-trained English model files
curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.4/deepspeech-0.7.4-models.pbmm
@ -28,27 +28,27 @@ To install and use DeepSpeech all you have to do is:
tar xvf audio-0.7.4.tar.gz
# Transcribe an audio file
deepspeech --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio/2830-3980-0043.wav
mozilla_voice_stt --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio/2830-3980-0043.wav
A pre-trained English model is available for use and can be downloaded following the instructions in :ref:`the usage docs <usage-docs>`. For the latest release, including pre-trained models and checkpoints, `see the GitHub releases page <https://github.com/mozilla/DeepSpeech/releases/latest>`_.
Quicker inference can be performed using a supported NVIDIA GPU on Linux. See the `release notes <https://github.com/mozilla/DeepSpeech/releases/latest>`_ to find which GPUs are supported. To run ``deepspeech`` on a GPU, install the GPU specific package:
Quicker inference can be performed using a supported NVIDIA GPU on Linux. See the `release notes <https://github.com/mozilla/DeepSpeech/releases/latest>`_ to find which GPUs are supported. To run ``mozilla_voice_stt`` on a GPU, install the GPU specific package:
.. code-block:: bash
# Create and activate a virtualenv
virtualenv -p python3 $HOME/tmp/deepspeech-gpu-venv/
source $HOME/tmp/deepspeech-gpu-venv/bin/activate
virtualenv -p python3 $HOME/tmp/stt-gpu-venv/
source $HOME/tmp/stt-gpu-venv/bin/activate
# Install DeepSpeech CUDA enabled package
pip3 install deepspeech-gpu
# Install Mozilla Voice STT CUDA enabled package
pip3 install mozilla_voice_stt_cuda
# Transcribe an audio file.
deepspeech --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio/2830-3980-0043.wav
mozilla_voice_stt --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio/2830-3980-0043.wav
Please ensure you have the required :ref:`CUDA dependencies <cuda-deps>`.
See the output of ``deepspeech -h`` for more information on the use of ``deepspeech``. (If you experience problems running ``deepspeech``, please check :ref:`required runtime dependencies <runtime-deps>`).
See the output of ``mozilla_voice_stt -h`` for more information on the use of ``mozilla_voice_stt``. (If you experience problems running ``mozilla_voice_stt``, please check :ref:`required runtime dependencies <runtime-deps>`).
.. toctree::
:maxdepth: 2
@ -76,7 +76,7 @@ See the output of ``deepspeech -h`` for more information on the use of ``deepspe
:maxdepth: 2
:caption: Architecture and training
DeepSpeech
AcousticModel
Geometry

Просмотреть файл

@ -10,7 +10,7 @@ import csv
import os
import sys
from deepspeech import Model
from mozilla_voice_stt import Model
from deepspeech_training.util.evaluate_tools import calculate_and_print_report
from deepspeech_training.util.flags import create_flags
from functools import partial
@ -19,11 +19,8 @@ from six.moves import zip, range
r'''
This module should be self-contained:
- build libdeepspeech.so with TFLite:
- bazel build [...] --define=runtime=tflite [...] //native_client:libdeepspeech.so
- make -C native_client/python/ TFDIR=... bindings
- setup a virtualenv
- pip install native_client/python/dist/deepspeech*.whl
- pip install mozilla_voice_stt_tflite
- pip install -r requirements_eval_tflite.txt
Then run with a TF Lite model, a scorer and a CSV test file

Просмотреть файл

@ -1,6 +1,6 @@
Examples
========
DeepSpeech examples were moved to a separate repository.
Mozilla Voice STT examples were moved to a separate repository.
New location: https://github.com/mozilla/DeepSpeech-examples

Просмотреть файл

@ -1,14 +1,14 @@
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := deepspeech-prebuilt
LOCAL_SRC_FILES := $(TFDIR)/bazel-bin/native_client/libdeepspeech.so
LOCAL_MODULE := mozilla_voice_stt-prebuilt
LOCAL_SRC_FILES := $(TFDIR)/bazel-bin/native_client/libmozilla_voice_stt.so
include $(PREBUILT_SHARED_LIBRARY)
include $(CLEAR_VARS)
LOCAL_CPP_EXTENSION := .cc .cxx .cpp
LOCAL_MODULE := deepspeech
LOCAL_MODULE := mozilla_voice_stt
LOCAL_SRC_FILES := client.cc
LOCAL_SHARED_LIBRARIES := deepspeech-prebuilt
LOCAL_SHARED_LIBRARIES := mozilla_voice_stt-prebuilt
LOCAL_LDFLAGS := -Wl,--no-as-needed
include $(BUILD_EXECUTABLE)

Просмотреть файл

@ -96,10 +96,10 @@ cc_library(
)
tf_cc_shared_object(
name = "libdeepspeech.so",
name = "libmozilla_voice_stt.so",
srcs = [
"deepspeech.cc",
"deepspeech.h",
"mozilla_voice_stt.h",
"deepspeech_errors.cc",
"modelstate.cc",
"modelstate.h",
@ -149,7 +149,7 @@ tf_cc_shared_object(
#"//tensorflow/core:all_kernels",
### => Trying to be more fine-grained
### Use bin/ops_in_graph.py to list all the ops used by a frozen graph.
### CPU only build, libdeepspeech.so file size reduced by ~50%
### CPU only build, libmozilla_voice_stt.so file size reduced by ~50%
"//tensorflow/core/kernels:spectrogram_op", # AudioSpectrogram
"//tensorflow/core/kernels:bias_op", # BiasAdd
"//tensorflow/core/kernels:cast_op", # Cast
@ -189,11 +189,11 @@ tf_cc_shared_object(
)
genrule(
name = "libdeepspeech_so_dsym",
srcs = [":libdeepspeech.so"],
outs = ["libdeepspeech.so.dSYM"],
name = "libmozilla_voice_stt_so_dsym",
srcs = [":libmozilla_voice_stt.so"],
outs = ["libmozilla_voice_stt.so.dSYM"],
output_to_bindir = True,
cmd = "dsymutil $(location :libdeepspeech.so) -o $@"
cmd = "dsymutil $(location :libmozilla_voice_stt.so) -o $@"
)
cc_binary(

Просмотреть файл

@ -1,5 +1,5 @@
This file contains some notes on coding style within the C++ portion of the
DeepSpeech project. It is very much a work in progress and incomplete.
Mozilla Voice STT project. It is very much a work in progress and incomplete.
General
=======

Просмотреть файл

@ -16,32 +16,32 @@ include definitions.mk
default: $(DEEPSPEECH_BIN)
clean:
rm -f deepspeech
rm -f $(DEEPSPEECH_BIN)
$(DEEPSPEECH_BIN): client.cc Makefile
$(CXX) $(CFLAGS) $(CFLAGS_DEEPSPEECH) $(SOX_CFLAGS) client.cc $(LDFLAGS) $(SOX_LDFLAGS)
ifeq ($(OS),Darwin)
install_name_tool -change bazel-out/local-opt/bin/native_client/libdeepspeech.so @rpath/libdeepspeech.so deepspeech
install_name_tool -change bazel-out/local-opt/bin/native_client/libmozilla_voice_stt.so @rpath/libmozilla_voice_stt.so $(DEEPSPEECH_BIN)
endif
run: $(DEEPSPEECH_BIN)
${META_LD_LIBRARY_PATH}=${TFDIR}/bazel-bin/native_client:${${META_LD_LIBRARY_PATH}} ./deepspeech ${ARGS}
${META_LD_LIBRARY_PATH}=${TFDIR}/bazel-bin/native_client:${${META_LD_LIBRARY_PATH}} ./$(DEEPSPEECH_BIN) ${ARGS}
debug: $(DEEPSPEECH_BIN)
${META_LD_LIBRARY_PATH}=${TFDIR}/bazel-bin/native_client:${${META_LD_LIBRARY_PATH}} gdb --args ./deepspeech ${ARGS}
${META_LD_LIBRARY_PATH}=${TFDIR}/bazel-bin/native_client:${${META_LD_LIBRARY_PATH}} gdb --args ./$(DEEPSPEECH_BIN) ${ARGS}
install: $(DEEPSPEECH_BIN)
install -d ${PREFIX}/lib
install -m 0644 ${TFDIR}/bazel-bin/native_client/libdeepspeech.so ${PREFIX}/lib/
install -m 0644 ${TFDIR}/bazel-bin/native_client/libmozilla_voice_stt.so ${PREFIX}/lib/
install -d ${PREFIX}/include
install -m 0644 deepspeech.h ${PREFIX}/include
install -m 0644 mozilla_voice_stt.h ${PREFIX}/include
install -d ${PREFIX}/bin
install -m 0755 deepspeech ${PREFIX}/bin/
install -m 0755 $(DEEPSPEECH_BIN) ${PREFIX}/bin/
uninstall:
rm -f ${PREFIX}/bin/deepspeech
rm -f ${PREFIX}/bin/$(DEEPSPEECH_BIN)
rmdir --ignore-fail-on-non-empty ${PREFIX}/bin
rm -f ${PREFIX}/lib/libdeepspeech.so
rm -f ${PREFIX}/lib/libmozilla_voice_stt.so
rmdir --ignore-fail-on-non-empty ${PREFIX}/lib
print-toolchain:

Просмотреть файл

@ -8,7 +8,7 @@
#endif
#include <iostream>
#include "deepspeech.h"
#include "mozilla_voice_stt.h"
char* model = NULL;
@ -43,7 +43,7 @@ void PrintHelp(const char* bin)
std::cout <<
"Usage: " << bin << " --model MODEL [--scorer SCORER] --audio AUDIO [-t] [-e]\n"
"\n"
"Running DeepSpeech inference.\n"
"Running Mozilla Voice STT inference.\n"
"\n"
"\t--model MODEL\t\t\tPath to the model (protocol buffer binary file)\n"
"\t--scorer SCORER\t\t\tPath to the external scorer file\n"
@ -58,9 +58,9 @@ void PrintHelp(const char* bin)
"\t--stream size\t\t\tRun in stream mode, output intermediate results\n"
"\t--help\t\t\t\tShow help\n"
"\t--version\t\t\tPrint version and exits\n";
char* version = DS_Version();
std::cerr << "DeepSpeech " << version << "\n";
DS_FreeString(version);
char* version = STT_Version();
std::cerr << "Mozilla Voice STT " << version << "\n";
STT_FreeString(version);
exit(1);
}
@ -153,9 +153,9 @@ bool ProcessArgs(int argc, char** argv)
}
if (has_versions) {
char* version = DS_Version();
std::cout << "DeepSpeech " << version << "\n";
DS_FreeString(version);
char* version = STT_Version();
std::cout << "Mozilla Voice STT " << version << "\n";
STT_FreeString(version);
return false;
}

Просмотреть файл

@ -34,7 +34,7 @@
#endif // NO_DIR
#include <vector>
#include "deepspeech.h"
#include "mozilla_voice_stt.h"
#include "args.h"
typedef struct {
@ -168,17 +168,17 @@ LocalDsSTT(ModelState* aCtx, const short* aBuffer, size_t aBufferSize,
// sphinx-doc: c_ref_inference_start
if (extended_output) {
Metadata *result = DS_SpeechToTextWithMetadata(aCtx, aBuffer, aBufferSize, 1);
Metadata *result = STT_SpeechToTextWithMetadata(aCtx, aBuffer, aBufferSize, 1);
res.string = CandidateTranscriptToString(&result->transcripts[0]);
DS_FreeMetadata(result);
STT_FreeMetadata(result);
} else if (json_output) {
Metadata *result = DS_SpeechToTextWithMetadata(aCtx, aBuffer, aBufferSize, json_candidate_transcripts);
Metadata *result = STT_SpeechToTextWithMetadata(aCtx, aBuffer, aBufferSize, json_candidate_transcripts);
res.string = MetadataToJSON(result);
DS_FreeMetadata(result);
STT_FreeMetadata(result);
} else if (stream_size > 0) {
StreamingState* ctx;
int status = DS_CreateStream(aCtx, &ctx);
if (status != DS_ERR_OK) {
int status = STT_CreateStream(aCtx, &ctx);
if (status != STT_ERR_OK) {
res.string = strdup("");
return res;
}
@ -186,22 +186,22 @@ LocalDsSTT(ModelState* aCtx, const short* aBuffer, size_t aBufferSize,
const char *last = nullptr;
while (off < aBufferSize) {
size_t cur = aBufferSize - off > stream_size ? stream_size : aBufferSize - off;
DS_FeedAudioContent(ctx, aBuffer + off, cur);
STT_FeedAudioContent(ctx, aBuffer + off, cur);
off += cur;
const char* partial = DS_IntermediateDecode(ctx);
const char* partial = STT_IntermediateDecode(ctx);
if (last == nullptr || strcmp(last, partial)) {
printf("%s\n", partial);
last = partial;
} else {
DS_FreeString((char *) partial);
STT_FreeString((char *) partial);
}
}
if (last != nullptr) {
DS_FreeString((char *) last);
STT_FreeString((char *) last);
}
res.string = DS_FinishStream(ctx);
res.string = STT_FinishStream(ctx);
} else {
res.string = DS_SpeechToText(aCtx, aBuffer, aBufferSize);
res.string = STT_SpeechToText(aCtx, aBuffer, aBufferSize);
}
// sphinx-doc: c_ref_inference_stop
@ -367,7 +367,7 @@ GetAudioBuffer(const char* path, int desired_sample_rate)
void
ProcessFile(ModelState* context, const char* path, bool show_times)
{
ds_audio_buffer audio = GetAudioBuffer(path, DS_GetModelSampleRate(context));
ds_audio_buffer audio = GetAudioBuffer(path, STT_GetModelSampleRate(context));
// Pass audio to DeepSpeech
// We take half of buffer_size because buffer is a char* while
@ -381,7 +381,7 @@ ProcessFile(ModelState* context, const char* path, bool show_times)
if (result.string) {
printf("%s\n", result.string);
DS_FreeString((char*)result.string);
STT_FreeString((char*)result.string);
}
if (show_times) {
@ -400,16 +400,16 @@ main(int argc, char **argv)
// Initialise DeepSpeech
ModelState* ctx;
// sphinx-doc: c_ref_model_start
int status = DS_CreateModel(model, &ctx);
int status = STT_CreateModel(model, &ctx);
if (status != 0) {
char* error = DS_ErrorCodeToErrorMessage(status);
char* error = STT_ErrorCodeToErrorMessage(status);
fprintf(stderr, "Could not create model: %s\n", error);
free(error);
return 1;
}
if (set_beamwidth) {
status = DS_SetModelBeamWidth(ctx, beam_width);
status = STT_SetModelBeamWidth(ctx, beam_width);
if (status != 0) {
fprintf(stderr, "Could not set model beam width.\n");
return 1;
@ -417,13 +417,13 @@ main(int argc, char **argv)
}
if (scorer) {
status = DS_EnableExternalScorer(ctx, scorer);
status = STT_EnableExternalScorer(ctx, scorer);
if (status != 0) {
fprintf(stderr, "Could not enable external scorer.\n");
return 1;
}
if (set_alphabeta) {
status = DS_SetScorerAlphaBeta(ctx, lm_alpha, lm_beta);
status = STT_SetScorerAlphaBeta(ctx, lm_alpha, lm_beta);
if (status != 0) {
fprintf(stderr, "Error setting scorer alpha and beta.\n");
return 1;
@ -485,7 +485,7 @@ main(int argc, char **argv)
sox_quit();
#endif // NO_SOX
DS_FreeModel(ctx);
STT_FreeModel(ctx);
return 0;
}

Просмотреть файл

@ -10,7 +10,7 @@ __version__ = swigwrapper.__version__.decode('utf-8')
# Hack: import error codes by matching on their names, as SWIG unfortunately
# does not support binding enums to Python in a scoped manner yet.
for symbol in dir(swigwrapper):
if symbol.startswith('DS_ERR_'):
if symbol.startswith('STT_ERR_'):
globals()[symbol] = getattr(swigwrapper, symbol)
class Scorer(swigwrapper.Scorer):

Просмотреть файл

@ -74,13 +74,13 @@ int Scorer::load_lm(const std::string& lm_path)
// Check if file is readable to avoid KenLM throwing an exception
const char* filename = lm_path.c_str();
if (access(filename, R_OK) != 0) {
return DS_ERR_SCORER_UNREADABLE;
return STT_ERR_SCORER_UNREADABLE;
}
// Check if the file format is valid to avoid KenLM throwing an exception
lm::ngram::ModelType model_type;
if (!lm::ngram::RecognizeBinary(filename, model_type)) {
return DS_ERR_SCORER_INVALID_LM;
return STT_ERR_SCORER_INVALID_LM;
}
// Load the LM
@ -97,7 +97,7 @@ int Scorer::load_lm(const std::string& lm_path)
uint64_t trie_offset = language_model_->GetEndOfSearchOffset();
if (package_size <= trie_offset) {
// File ends without a trie structure
return DS_ERR_SCORER_NO_TRIE;
return STT_ERR_SCORER_NO_TRIE;
}
// Read metadata and trie from file
@ -113,7 +113,7 @@ int Scorer::load_trie(std::ifstream& fin, const std::string& file_path)
if (magic != MAGIC) {
std::cerr << "Error: Can't parse scorer file, invalid header. Try updating "
"your scorer file." << std::endl;
return DS_ERR_SCORER_INVALID_TRIE;
return STT_ERR_SCORER_INVALID_TRIE;
}
int version;
@ -125,10 +125,10 @@ int Scorer::load_trie(std::ifstream& fin, const std::string& file_path)
if (version < FILE_VERSION) {
std::cerr << "Update your scorer file.";
} else {
std::cerr << "Downgrade your scorer file or update your version of DeepSpeech.";
std::cerr << "Downgrade your scorer file or update your version of Mozilla Voice STT.";
}
std::cerr << std::endl;
return DS_ERR_SCORER_VERSION_MISMATCH;
return STT_ERR_SCORER_VERSION_MISMATCH;
}
fin.read(reinterpret_cast<char*>(&is_utf8_mode_), sizeof(is_utf8_mode_));
@ -143,7 +143,7 @@ int Scorer::load_trie(std::ifstream& fin, const std::string& file_path)
opt.mode = fst::FstReadOptions::MAP;
opt.source = file_path;
dictionary.reset(FstType::Read(fin, opt));
return DS_ERR_OK;
return STT_ERR_OK;
}
bool Scorer::save_dictionary(const std::string& path, bool append_instead_of_overwrite)

Просмотреть файл

@ -13,7 +13,7 @@
#include "path_trie.h"
#include "alphabet.h"
#include "deepspeech.h"
#include "mozilla_voice_stt.h"
const double OOV_SCORE = -1000.0;
const std::string START_TOKEN = "<s>";

Просмотреть файл

@ -42,14 +42,14 @@ namespace std {
%constant const char* __version__ = ds_version();
%constant const char* __git_version__ = ds_git_version();
// Import only the error code enum definitions from deepspeech.h
// Import only the error code enum definitions from mozilla_voice_stt.h
// We can't just do |%ignore "";| here because it affects this file globally (even
// files %include'd above). That causes SWIG to lose destructor information and
// leads to leaks of the wrapper objects.
// Instead we ignore functions and classes (structs), which are the only other
// things in deepspeech.h. If we add some new construct to deepspeech.h we need
// things in mozilla_voice_stt.h. If we add some new construct to mozilla_voice_stt.h we need
// to update the ignore rules here to avoid exposing unwanted APIs in the decoder
// package.
%rename("$ignore", %$isfunction) "";
%rename("$ignore", %$isclass) "";
%include "../deepspeech.h"
%include "../mozilla_voice_stt.h"

Просмотреть файл

@ -9,7 +9,7 @@
#include <utility>
#include <vector>
#include "deepspeech.h"
#include "mozilla_voice_stt.h"
#include "alphabet.h"
#include "modelstate.h"
@ -25,7 +25,7 @@
#ifdef __ANDROID__
#include <android/log.h>
#define LOG_TAG "libdeepspeech"
#define LOG_TAG "libmozilla_voice_stt"
#define LOGD(...) __android_log_print(ANDROID_LOG_DEBUG, LOG_TAG, __VA_ARGS__)
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, __VA_ARGS__)
#else
@ -263,23 +263,23 @@ StreamingState::processBatch(const vector<float>& buf, unsigned int n_steps)
}
int
DS_CreateModel(const char* aModelPath,
STT_CreateModel(const char* aModelPath,
ModelState** retval)
{
*retval = nullptr;
std::cerr << "TensorFlow: " << tf_local_git_version() << std::endl;
std::cerr << "DeepSpeech: " << ds_git_version() << std::endl;
std::cerr << "Mozilla Voice STT: " << ds_git_version() << std::endl;
#ifdef __ANDROID__
LOGE("TensorFlow: %s", tf_local_git_version());
LOGD("TensorFlow: %s", tf_local_git_version());
LOGE("DeepSpeech: %s", ds_git_version());
LOGD("DeepSpeech: %s", ds_git_version());
LOGE("Mozilla Voice STT: %s", ds_git_version());
LOGD("Mozilla Voice STT: %s", ds_git_version());
#endif
if (!aModelPath || strlen(aModelPath) < 1) {
std::cerr << "No model specified, cannot continue." << std::endl;
return DS_ERR_NO_MODEL;
return STT_ERR_NO_MODEL;
}
std::unique_ptr<ModelState> model(
@ -292,79 +292,79 @@ DS_CreateModel(const char* aModelPath,
if (!model) {
std::cerr << "Could not allocate model state." << std::endl;
return DS_ERR_FAIL_CREATE_MODEL;
return STT_ERR_FAIL_CREATE_MODEL;
}
int err = model->init(aModelPath);
if (err != DS_ERR_OK) {
if (err != STT_ERR_OK) {
return err;
}
*retval = model.release();
return DS_ERR_OK;
return STT_ERR_OK;
}
unsigned int
DS_GetModelBeamWidth(const ModelState* aCtx)
STT_GetModelBeamWidth(const ModelState* aCtx)
{
return aCtx->beam_width_;
}
int
DS_SetModelBeamWidth(ModelState* aCtx, unsigned int aBeamWidth)
STT_SetModelBeamWidth(ModelState* aCtx, unsigned int aBeamWidth)
{
aCtx->beam_width_ = aBeamWidth;
return 0;
}
int
DS_GetModelSampleRate(const ModelState* aCtx)
STT_GetModelSampleRate(const ModelState* aCtx)
{
return aCtx->sample_rate_;
}
void
DS_FreeModel(ModelState* ctx)
STT_FreeModel(ModelState* ctx)
{
delete ctx;
}
int
DS_EnableExternalScorer(ModelState* aCtx,
STT_EnableExternalScorer(ModelState* aCtx,
const char* aScorerPath)
{
std::unique_ptr<Scorer> scorer(new Scorer());
int err = scorer->init(aScorerPath, aCtx->alphabet_);
if (err != 0) {
return DS_ERR_INVALID_SCORER;
return STT_ERR_INVALID_SCORER;
}
aCtx->scorer_ = std::move(scorer);
return DS_ERR_OK;
return STT_ERR_OK;
}
int
DS_DisableExternalScorer(ModelState* aCtx)
STT_DisableExternalScorer(ModelState* aCtx)
{
if (aCtx->scorer_) {
aCtx->scorer_.reset();
return DS_ERR_OK;
return STT_ERR_OK;
}
return DS_ERR_SCORER_NOT_ENABLED;
return STT_ERR_SCORER_NOT_ENABLED;
}
int DS_SetScorerAlphaBeta(ModelState* aCtx,
int STT_SetScorerAlphaBeta(ModelState* aCtx,
float aAlpha,
float aBeta)
{
if (aCtx->scorer_) {
aCtx->scorer_->reset_params(aAlpha, aBeta);
return DS_ERR_OK;
return STT_ERR_OK;
}
return DS_ERR_SCORER_NOT_ENABLED;
return STT_ERR_SCORER_NOT_ENABLED;
}
int
DS_CreateStream(ModelState* aCtx,
STT_CreateStream(ModelState* aCtx,
StreamingState** retval)
{
*retval = nullptr;
@ -372,7 +372,7 @@ DS_CreateStream(ModelState* aCtx,
std::unique_ptr<StreamingState> ctx(new StreamingState());
if (!ctx) {
std::cerr << "Could not allocate streaming state." << std::endl;
return DS_ERR_FAIL_CREATE_STREAM;
return STT_ERR_FAIL_CREATE_STREAM;
}
ctx->audio_buffer_.reserve(aCtx->audio_win_len_);
@ -393,11 +393,11 @@ DS_CreateStream(ModelState* aCtx,
aCtx->scorer_);
*retval = ctx.release();
return DS_ERR_OK;
return STT_ERR_OK;
}
void
DS_FeedAudioContent(StreamingState* aSctx,
STT_FeedAudioContent(StreamingState* aSctx,
const short* aBuffer,
unsigned int aBufferSize)
{
@ -405,32 +405,32 @@ DS_FeedAudioContent(StreamingState* aSctx,
}
char*
DS_IntermediateDecode(const StreamingState* aSctx)
STT_IntermediateDecode(const StreamingState* aSctx)
{
return aSctx->intermediateDecode();
}
Metadata*
DS_IntermediateDecodeWithMetadata(const StreamingState* aSctx,
STT_IntermediateDecodeWithMetadata(const StreamingState* aSctx,
unsigned int aNumResults)
{
return aSctx->intermediateDecodeWithMetadata(aNumResults);
}
char*
DS_FinishStream(StreamingState* aSctx)
STT_FinishStream(StreamingState* aSctx)
{
char* str = aSctx->finishStream();
DS_FreeStream(aSctx);
STT_FreeStream(aSctx);
return str;
}
Metadata*
DS_FinishStreamWithMetadata(StreamingState* aSctx,
STT_FinishStreamWithMetadata(StreamingState* aSctx,
unsigned int aNumResults)
{
Metadata* result = aSctx->finishStreamWithMetadata(aNumResults);
DS_FreeStream(aSctx);
STT_FreeStream(aSctx);
return result;
}
@ -440,41 +440,41 @@ CreateStreamAndFeedAudioContent(ModelState* aCtx,
unsigned int aBufferSize)
{
StreamingState* ctx;
int status = DS_CreateStream(aCtx, &ctx);
if (status != DS_ERR_OK) {
int status = STT_CreateStream(aCtx, &ctx);
if (status != STT_ERR_OK) {
return nullptr;
}
DS_FeedAudioContent(ctx, aBuffer, aBufferSize);
STT_FeedAudioContent(ctx, aBuffer, aBufferSize);
return ctx;
}
char*
DS_SpeechToText(ModelState* aCtx,
STT_SpeechToText(ModelState* aCtx,
const short* aBuffer,
unsigned int aBufferSize)
{
StreamingState* ctx = CreateStreamAndFeedAudioContent(aCtx, aBuffer, aBufferSize);
return DS_FinishStream(ctx);
return STT_FinishStream(ctx);
}
Metadata*
DS_SpeechToTextWithMetadata(ModelState* aCtx,
STT_SpeechToTextWithMetadata(ModelState* aCtx,
const short* aBuffer,
unsigned int aBufferSize,
unsigned int aNumResults)
{
StreamingState* ctx = CreateStreamAndFeedAudioContent(aCtx, aBuffer, aBufferSize);
return DS_FinishStreamWithMetadata(ctx, aNumResults);
return STT_FinishStreamWithMetadata(ctx, aNumResults);
}
void
DS_FreeStream(StreamingState* aSctx)
STT_FreeStream(StreamingState* aSctx)
{
delete aSctx;
}
void
DS_FreeMetadata(Metadata* m)
STT_FreeMetadata(Metadata* m)
{
if (m) {
for (int i = 0; i < m->num_transcripts; ++i) {
@ -491,13 +491,13 @@ DS_FreeMetadata(Metadata* m)
}
void
DS_FreeString(char* str)
STT_FreeString(char* str)
{
free(str);
}
char*
DS_Version()
STT_Version()
{
return strdup(ds_version());
}

Просмотреть файл

@ -1,8 +1,8 @@
#include "deepspeech.h"
#include "mozilla_voice_stt.h"
#include <string.h>
char*
DS_ErrorCodeToErrorMessage(int aErrorCode)
STT_ErrorCodeToErrorMessage(int aErrorCode)
{
#define RETURN_MESSAGE(NAME, VALUE, DESC) \
case NAME: \
@ -10,7 +10,7 @@ DS_ErrorCodeToErrorMessage(int aErrorCode)
switch(aErrorCode)
{
DS_FOR_EACH_ERROR(RETURN_MESSAGE)
STT_FOR_EACH_ERROR(RETURN_MESSAGE)
default:
return strdup("Unknown error, please make sure you are using the correct native binary.");
}

Просмотреть файл

@ -18,9 +18,9 @@ ifeq ($(findstring _NT,$(OS)),_NT)
PLATFORM_EXE_SUFFIX := .exe
endif
DEEPSPEECH_BIN := deepspeech$(PLATFORM_EXE_SUFFIX)
DEEPSPEECH_BIN := mozilla_voice_stt$(PLATFORM_EXE_SUFFIX)
CFLAGS_DEEPSPEECH := -std=c++11 -o $(DEEPSPEECH_BIN)
LINK_DEEPSPEECH := -ldeepspeech
LINK_DEEPSPEECH := -lmozilla_voice_stt
LINK_PATH_DEEPSPEECH := -L${TFDIR}/bazel-bin/native_client
ifeq ($(TARGET),host)
@ -53,7 +53,7 @@ TOOL_CC := cl.exe
TOOL_CXX := cl.exe
TOOL_LD := link.exe
TOOL_LIBEXE := lib.exe
LINK_DEEPSPEECH := $(TFDIR)\bazel-bin\native_client\libdeepspeech.so.if.lib
LINK_DEEPSPEECH := $(TFDIR)\bazel-bin\native_client\libmozilla_voice_stt.so.if.lib
LINK_PATH_DEEPSPEECH :=
CFLAGS_DEEPSPEECH := -nologo -Fe$(DEEPSPEECH_BIN)
SOX_CFLAGS :=
@ -174,7 +174,7 @@ define copy_missing_libs
new_missing="$$( (for f in $$(otool -L $$lib 2>/dev/null | tail -n +2 | awk '{ print $$1 }' | grep -v '$$lib'); do ls -hal $$f; done;) 2>&1 | grep 'No such' | cut -d':' -f2 | xargs basename -a)"; \
missing_libs="$$missing_libs $$new_missing"; \
elif [ "$(OS)" = "${TC_MSYS_VERSION}" ]; then \
missing_libs="libdeepspeech.so"; \
missing_libs="libmozilla_voice_stt.so"; \
else \
missing_libs="$$missing_libs $$($(LDD) $$lib | grep 'not found' | awk '{ print $$1 }')"; \
fi; \

Просмотреть файл

@ -1,30 +0,0 @@
namespace DeepSpeechClient.Enums
{
/// <summary>
/// Error codes from the native DeepSpeech binary.
/// </summary>
internal enum ErrorCodes
{
// OK
DS_ERR_OK = 0x0000,
// Missing information
DS_ERR_NO_MODEL = 0x1000,
// Invalid parameters
DS_ERR_INVALID_ALPHABET = 0x2000,
DS_ERR_INVALID_SHAPE = 0x2001,
DS_ERR_INVALID_SCORER = 0x2002,
DS_ERR_MODEL_INCOMPATIBLE = 0x2003,
DS_ERR_SCORER_NOT_ENABLED = 0x2004,
// Runtime failures
DS_ERR_FAIL_INIT_MMAP = 0x3000,
DS_ERR_FAIL_INIT_SESS = 0x3001,
DS_ERR_FAIL_INTERPRETER = 0x3002,
DS_ERR_FAIL_RUN_SESS = 0x3003,
DS_ERR_FAIL_CREATE_STREAM = 0x3004,
DS_ERR_FAIL_READ_PROTOBUF = 0x3005,
DS_ERR_FAIL_CREATE_SESS = 0x3006,
}
}

Просмотреть файл

@ -1,102 +0,0 @@
using DeepSpeechClient.Enums;
using System;
using System.Runtime.InteropServices;
namespace DeepSpeechClient
{
/// <summary>
/// Wrapper for the native implementation of "libdeepspeech.so"
/// </summary>
internal static class NativeImp
{
#region Native Implementation
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl,
CharSet = CharSet.Ansi, SetLastError = true)]
internal static extern IntPtr DS_Version();
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern ErrorCodes DS_CreateModel(string aModelPath,
ref IntPtr** pint);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern IntPtr DS_ErrorCodeToErrorMessage(int aErrorCode);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern uint DS_GetModelBeamWidth(IntPtr** aCtx);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern ErrorCodes DS_SetModelBeamWidth(IntPtr** aCtx,
uint aBeamWidth);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern ErrorCodes DS_CreateModel(string aModelPath,
uint aBeamWidth,
ref IntPtr** pint);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern int DS_GetModelSampleRate(IntPtr** aCtx);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern ErrorCodes DS_EnableExternalScorer(IntPtr** aCtx,
string aScorerPath);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern ErrorCodes DS_DisableExternalScorer(IntPtr** aCtx);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern ErrorCodes DS_SetScorerAlphaBeta(IntPtr** aCtx,
float aAlpha,
float aBeta);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl,
CharSet = CharSet.Ansi, SetLastError = true)]
internal static unsafe extern IntPtr DS_SpeechToText(IntPtr** aCtx,
short[] aBuffer,
uint aBufferSize);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl, SetLastError = true)]
internal static unsafe extern IntPtr DS_SpeechToTextWithMetadata(IntPtr** aCtx,
short[] aBuffer,
uint aBufferSize,
uint aNumResults);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern void DS_FreeModel(IntPtr** aCtx);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern ErrorCodes DS_CreateStream(IntPtr** aCtx,
ref IntPtr** retval);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern void DS_FreeStream(IntPtr** aSctx);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern void DS_FreeMetadata(IntPtr metadata);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern void DS_FreeString(IntPtr str);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl,
CharSet = CharSet.Ansi, SetLastError = true)]
internal static unsafe extern void DS_FeedAudioContent(IntPtr** aSctx,
short[] aBuffer,
uint aBufferSize);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern IntPtr DS_IntermediateDecode(IntPtr** aSctx);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern IntPtr DS_IntermediateDecodeWithMetadata(IntPtr** aSctx,
uint aNumResults);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl,
CharSet = CharSet.Ansi, SetLastError = true)]
internal static unsafe extern IntPtr DS_FinishStream(IntPtr** aSctx);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern IntPtr DS_FinishStreamWithMetadata(IntPtr** aSctx,
uint aNumResults);
#endregion
}
}

Просмотреть файл

@ -2,9 +2,9 @@ Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Version 16
VisualStudioVersion = 16.0.30204.135
MinimumVisualStudioVersion = 10.0.40219.1
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "DeepSpeechClient", "DeepSpeechClient\DeepSpeechClient.csproj", "{56DE4091-BBBE-47E4-852D-7268B33B971F}"
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "MozillaVoiceSttClient", "MozillaVoiceSttClient\MozillaVoiceSttClient.csproj", "{56DE4091-BBBE-47E4-852D-7268B33B971F}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "DeepSpeechConsole", "DeepSpeechConsole\DeepSpeechConsole.csproj", "{312965E5-C4F6-4D95-BA64-79906B8BC7AC}"
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MozillaVoiceSttConsole", "MozillaVoiceSttConsole\MozillaVoiceSttConsole.csproj", "{312965E5-C4F6-4D95-BA64-79906B8BC7AC}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution

Просмотреть файл

@ -0,0 +1,29 @@
namespace MozillaVoiceSttClient.Enums
{
/// <summary>
/// Error codes from the native Mozilla Voice STT binary.
/// </summary>
internal enum ErrorCodes
{
STT_ERR_OK = 0x0000,
STT_ERR_NO_MODEL = 0x1000,
STT_ERR_INVALID_ALPHABET = 0x2000,
STT_ERR_INVALID_SHAPE = 0x2001,
STT_ERR_INVALID_SCORER = 0x2002,
STT_ERR_MODEL_INCOMPATIBLE = 0x2003,
STT_ERR_SCORER_NOT_ENABLED = 0x2004,
STT_ERR_SCORER_UNREADABLE = 0x2005,
STT_ERR_SCORER_INVALID_LM = 0x2006,
STT_ERR_SCORER_NO_TRIE = 0x2007,
STT_ERR_SCORER_INVALID_TRIE = 0x2008,
STT_ERR_SCORER_VERSION_MISMATCH = 0x2009,
STT_ERR_FAIL_INIT_MMAP = 0x3000,
STT_ERR_FAIL_INIT_SESS = 0x3001,
STT_ERR_FAIL_INTERPRETER = 0x3002,
STT_ERR_FAIL_RUN_SESS = 0x3003,
STT_ERR_FAIL_CREATE_STREAM = 0x3004,
STT_ERR_FAIL_READ_PROTOBUF = 0x3005,
STT_ERR_FAIL_CREATE_SESS = 0x3006,
STT_ERR_FAIL_CREATE_MODEL = 0x3007,
}
}

Просмотреть файл

@ -1,9 +1,9 @@
using DeepSpeechClient.Structs;
using MozillaVoiceSttClient.Structs;
using System;
using System.Runtime.InteropServices;
using System.Text;
namespace DeepSpeechClient.Extensions
namespace MozillaVoiceSttClient.Extensions
{
internal static class NativeExtensions
{
@ -20,7 +20,7 @@ namespace DeepSpeechClient.Extensions
byte[] buffer = new byte[len];
Marshal.Copy(intPtr, buffer, 0, buffer.Length);
if (releasePtr)
NativeImp.DS_FreeString(intPtr);
NativeImp.STT_FreeString(intPtr);
string result = Encoding.UTF8.GetString(buffer);
return result;
}
@ -86,7 +86,7 @@ namespace DeepSpeechClient.Extensions
metadata.transcripts += sizeOfCandidateTranscript;
}
NativeImp.DS_FreeMetadata(intPtr);
NativeImp.STT_FreeMetadata(intPtr);
return managedMetadata;
}
}

Просмотреть файл

@ -1,13 +1,13 @@
using DeepSpeechClient.Models;
using MozillaVoiceSttClient.Models;
using System;
using System.IO;
namespace DeepSpeechClient.Interfaces
namespace MozillaVoiceSttClient.Interfaces
{
/// <summary>
/// Client interface of Mozilla's DeepSpeech implementation.
/// Client interface of Mozilla Voice STT.
/// </summary>
public interface IDeepSpeech : IDisposable
public interface IMozillaVoiceSttModel : IDisposable
{
/// <summary>
/// Return version of this library. The returned version is a semantic version
@ -59,7 +59,7 @@ namespace DeepSpeechClient.Interfaces
unsafe void SetScorerAlphaBeta(float aAlpha, float aBeta);
/// <summary>
/// Use the DeepSpeech model to perform Speech-To-Text.
/// Use the Mozilla Voice STT model to perform Speech-To-Text.
/// </summary>
/// <param name="aBuffer">A 16-bit, mono raw audio signal at the appropriate sample rate (matching what the model was trained on).</param>
/// <param name="aBufferSize">The number of samples in the audio signal.</param>
@ -68,7 +68,7 @@ namespace DeepSpeechClient.Interfaces
uint aBufferSize);
/// <summary>
/// Use the DeepSpeech model to perform Speech-To-Text, return results including metadata.
/// Use the Mozilla Voice STT model to perform Speech-To-Text, return results including metadata.
/// </summary>
/// <param name="aBuffer">A 16-bit, mono raw audio signal at the appropriate sample rate (matching what the model was trained on).</param>
/// <param name="aBufferSize">The number of samples in the audio signal.</param>
@ -83,26 +83,26 @@ namespace DeepSpeechClient.Interfaces
/// This can be used if you no longer need the result of an ongoing streaming
/// inference and don't want to perform a costly decode operation.
/// </summary>
unsafe void FreeStream(DeepSpeechStream stream);
unsafe void FreeStream(MozillaVoiceSttStream stream);
/// <summary>
/// Creates a new streaming inference state.
/// </summary>
unsafe DeepSpeechStream CreateStream();
unsafe MozillaVoiceSttStream CreateStream();
/// <summary>
/// Feeds audio samples to an ongoing streaming inference.
/// </summary>
/// <param name="stream">Instance of the stream to feed the data.</param>
/// <param name="aBuffer">An array of 16-bit, mono raw audio samples at the appropriate sample rate (matching what the model was trained on).</param>
unsafe void FeedAudioContent(DeepSpeechStream stream, short[] aBuffer, uint aBufferSize);
unsafe void FeedAudioContent(MozillaVoiceSttStream stream, short[] aBuffer, uint aBufferSize);
/// <summary>
/// Computes the intermediate decoding of an ongoing streaming inference.
/// </summary>
/// <param name="stream">Instance of the stream to decode.</param>
/// <returns>The STT intermediate result.</returns>
unsafe string IntermediateDecode(DeepSpeechStream stream);
unsafe string IntermediateDecode(MozillaVoiceSttStream stream);
/// <summary>
/// Computes the intermediate decoding of an ongoing streaming inference, including metadata.
@ -110,14 +110,14 @@ namespace DeepSpeechClient.Interfaces
/// <param name="stream">Instance of the stream to decode.</param>
/// <param name="aNumResults">Maximum number of candidate transcripts to return. Returned list might be smaller than this.</param>
/// <returns>The extended metadata result.</returns>
unsafe Metadata IntermediateDecodeWithMetadata(DeepSpeechStream stream, uint aNumResults);
unsafe Metadata IntermediateDecodeWithMetadata(MozillaVoiceSttStream stream, uint aNumResults);
/// <summary>
/// Closes the ongoing streaming inference, returns the STT result over the whole audio signal.
/// </summary>
/// <param name="stream">Instance of the stream to finish.</param>
/// <returns>The STT result.</returns>
unsafe string FinishStream(DeepSpeechStream stream);
unsafe string FinishStream(MozillaVoiceSttStream stream);
/// <summary>
/// Closes the ongoing streaming inference, returns the STT result over the whole audio signal, including metadata.
@ -125,6 +125,6 @@ namespace DeepSpeechClient.Interfaces
/// <param name="stream">Instance of the stream to finish.</param>
/// <param name="aNumResults">Maximum number of candidate transcripts to return. Returned list might be smaller than this.</param>
/// <returns>The extended metadata result.</returns>
unsafe Metadata FinishStreamWithMetadata(DeepSpeechStream stream, uint aNumResults);
unsafe Metadata FinishStreamWithMetadata(MozillaVoiceSttStream stream, uint aNumResults);
}
}

Просмотреть файл

@ -1,4 +1,4 @@
namespace DeepSpeechClient.Models
namespace MozillaVoiceSttClient.Models
{
/// <summary>
/// Stores the entire CTC output as an array of character metadata objects.

Просмотреть файл

@ -1,19 +1,19 @@
using System;
namespace DeepSpeechClient.Models
namespace MozillaVoiceSttClient.Models
{
/// <summary>
/// Wrapper of the pointer used for the decoding stream.
/// </summary>
public class DeepSpeechStream : IDisposable
public class MozillaVoiceSttStream : IDisposable
{
private unsafe IntPtr** _streamingStatePp;
/// <summary>
/// Initializes a new instance of <see cref="DeepSpeechStream"/>.
/// Initializes a new instance of <see cref="MozillaVoiceSttStream"/>.
/// </summary>
/// <param name="streamingStatePP">Native pointer of the native stream.</param>
public unsafe DeepSpeechStream(IntPtr** streamingStatePP)
public unsafe MozillaVoiceSttStream(IntPtr** streamingStatePP)
{
_streamingStatePp = streamingStatePP;
}

Просмотреть файл

@ -1,4 +1,4 @@
namespace DeepSpeechClient.Models
namespace MozillaVoiceSttClient.Models
{
/// <summary>
/// Stores the entire CTC output as an array of character metadata objects.

Просмотреть файл

@ -1,4 +1,4 @@
namespace DeepSpeechClient.Models
namespace MozillaVoiceSttClient.Models
{
/// <summary>
/// Stores each individual character, along with its timing information.

Просмотреть файл

@ -1,34 +1,34 @@
using DeepSpeechClient.Interfaces;
using DeepSpeechClient.Extensions;
using MozillaVoiceSttClient.Interfaces;
using MozillaVoiceSttClient.Extensions;
using System;
using System.IO;
using DeepSpeechClient.Enums;
using DeepSpeechClient.Models;
using MozillaVoiceSttClient.Enums;
using MozillaVoiceSttClient.Models;
namespace DeepSpeechClient
namespace MozillaVoiceSttClient
{
/// <summary>
/// Concrete implementation of <see cref="DeepSpeechClient.Interfaces.IDeepSpeech"/>.
/// Concrete implementation of <see cref="MozillaVoiceStt.Interfaces.IMozillaVoiceSttModel"/>.
/// </summary>
public class DeepSpeech : IDeepSpeech
public class MozillaVoiceSttModel : IMozillaVoiceSttModel
{
private unsafe IntPtr** _modelStatePP;
/// <summary>
/// Initializes a new instance of <see cref="DeepSpeech"/> class and creates a new acoustic model.
/// Initializes a new instance of <see cref="MozillaVoiceSttModel"/> class and creates a new acoustic model.
/// </summary>
/// <param name="aModelPath">The path to the frozen model graph.</param>
/// <exception cref="ArgumentException">Thrown when the native binary failed to create the model.</exception>
public DeepSpeech(string aModelPath)
public MozillaVoiceSttModel(string aModelPath)
{
CreateModel(aModelPath);
}
#region IDeepSpeech
#region IMozillaVoiceSttModel
/// <summary>
/// Create an object providing an interface to a trained DeepSpeech model.
/// Create an object providing an interface to a trained Mozilla Voice STT model.
/// </summary>
/// <param name="aModelPath">The path to the frozen model graph.</param>
/// <exception cref="ArgumentException">Thrown when the native binary failed to create the model.</exception>
@ -48,7 +48,7 @@ namespace DeepSpeechClient
{
throw new FileNotFoundException(exceptionMessage);
}
var resultCode = NativeImp.DS_CreateModel(aModelPath,
var resultCode = NativeImp.STT_CreateModel(aModelPath,
ref _modelStatePP);
EvaluateResultCode(resultCode);
}
@ -60,7 +60,7 @@ namespace DeepSpeechClient
/// <returns>Beam width value used by the model.</returns>
public unsafe uint GetModelBeamWidth()
{
return NativeImp.DS_GetModelBeamWidth(_modelStatePP);
return NativeImp.STT_GetModelBeamWidth(_modelStatePP);
}
/// <summary>
@ -70,7 +70,7 @@ namespace DeepSpeechClient
/// <exception cref="ArgumentException">Thrown on failure.</exception>
public unsafe void SetModelBeamWidth(uint aBeamWidth)
{
var resultCode = NativeImp.DS_SetModelBeamWidth(_modelStatePP, aBeamWidth);
var resultCode = NativeImp.STT_SetModelBeamWidth(_modelStatePP, aBeamWidth);
EvaluateResultCode(resultCode);
}
@ -80,7 +80,7 @@ namespace DeepSpeechClient
/// <returns>Sample rate.</returns>
public unsafe int GetModelSampleRate()
{
return NativeImp.DS_GetModelSampleRate(_modelStatePP);
return NativeImp.STT_GetModelSampleRate(_modelStatePP);
}
/// <summary>
@ -89,9 +89,9 @@ namespace DeepSpeechClient
/// <param name="resultCode">Native result code.</param>
private void EvaluateResultCode(ErrorCodes resultCode)
{
if (resultCode != ErrorCodes.DS_ERR_OK)
if (resultCode != ErrorCodes.STT_ERR_OK)
{
throw new ArgumentException(NativeImp.DS_ErrorCodeToErrorMessage((int)resultCode).PtrToString());
throw new ArgumentException(NativeImp.STT_ErrorCodeToErrorMessage((int)resultCode).PtrToString());
}
}
@ -100,7 +100,7 @@ namespace DeepSpeechClient
/// </summary>
public unsafe void Dispose()
{
NativeImp.DS_FreeModel(_modelStatePP);
NativeImp.STT_FreeModel(_modelStatePP);
}
/// <summary>
@ -120,7 +120,7 @@ namespace DeepSpeechClient
throw new FileNotFoundException($"Cannot find the scorer file: {aScorerPath}");
}
var resultCode = NativeImp.DS_EnableExternalScorer(_modelStatePP, aScorerPath);
var resultCode = NativeImp.STT_EnableExternalScorer(_modelStatePP, aScorerPath);
EvaluateResultCode(resultCode);
}
@ -130,7 +130,7 @@ namespace DeepSpeechClient
/// <exception cref="ArgumentException">Thrown when an external scorer is not enabled.</exception>
public unsafe void DisableExternalScorer()
{
var resultCode = NativeImp.DS_DisableExternalScorer(_modelStatePP);
var resultCode = NativeImp.STT_DisableExternalScorer(_modelStatePP);
EvaluateResultCode(resultCode);
}
@ -142,7 +142,7 @@ namespace DeepSpeechClient
/// <exception cref="ArgumentException">Thrown when an external scorer is not enabled.</exception>
public unsafe void SetScorerAlphaBeta(float aAlpha, float aBeta)
{
var resultCode = NativeImp.DS_SetScorerAlphaBeta(_modelStatePP,
var resultCode = NativeImp.STT_SetScorerAlphaBeta(_modelStatePP,
aAlpha,
aBeta);
EvaluateResultCode(resultCode);
@ -153,9 +153,9 @@ namespace DeepSpeechClient
/// </summary>
/// <param name="stream">Instance of the stream to feed the data.</param>
/// <param name="aBuffer">An array of 16-bit, mono raw audio samples at the appropriate sample rate (matching what the model was trained on).</param>
public unsafe void FeedAudioContent(DeepSpeechStream stream, short[] aBuffer, uint aBufferSize)
public unsafe void FeedAudioContent(MozillaVoiceSttStream stream, short[] aBuffer, uint aBufferSize)
{
NativeImp.DS_FeedAudioContent(stream.GetNativePointer(), aBuffer, aBufferSize);
NativeImp.STT_FeedAudioContent(stream.GetNativePointer(), aBuffer, aBufferSize);
}
/// <summary>
@ -163,9 +163,9 @@ namespace DeepSpeechClient
/// </summary>
/// <param name="stream">Instance of the stream to finish.</param>
/// <returns>The STT result.</returns>
public unsafe string FinishStream(DeepSpeechStream stream)
public unsafe string FinishStream(MozillaVoiceSttStream stream)
{
return NativeImp.DS_FinishStream(stream.GetNativePointer()).PtrToString();
return NativeImp.STT_FinishStream(stream.GetNativePointer()).PtrToString();
}
/// <summary>
@ -174,9 +174,9 @@ namespace DeepSpeechClient
/// <param name="stream">Instance of the stream to finish.</param>
/// <param name="aNumResults">Maximum number of candidate transcripts to return. Returned list might be smaller than this.</param>
/// <returns>The extended metadata result.</returns>
public unsafe Metadata FinishStreamWithMetadata(DeepSpeechStream stream, uint aNumResults)
public unsafe Metadata FinishStreamWithMetadata(MozillaVoiceSttStream stream, uint aNumResults)
{
return NativeImp.DS_FinishStreamWithMetadata(stream.GetNativePointer(), aNumResults).PtrToMetadata();
return NativeImp.STT_FinishStreamWithMetadata(stream.GetNativePointer(), aNumResults).PtrToMetadata();
}
/// <summary>
@ -184,9 +184,9 @@ namespace DeepSpeechClient
/// </summary>
/// <param name="stream">Instance of the stream to decode.</param>
/// <returns>The STT intermediate result.</returns>
public unsafe string IntermediateDecode(DeepSpeechStream stream)
public unsafe string IntermediateDecode(MozillaVoiceSttStream stream)
{
return NativeImp.DS_IntermediateDecode(stream.GetNativePointer()).PtrToString();
return NativeImp.STT_IntermediateDecode(stream.GetNativePointer()).PtrToString();
}
/// <summary>
@ -195,9 +195,9 @@ namespace DeepSpeechClient
/// <param name="stream">Instance of the stream to decode.</param>
/// <param name="aNumResults">Maximum number of candidate transcripts to return. Returned list might be smaller than this.</param>
/// <returns>The STT intermediate result.</returns>
public unsafe Metadata IntermediateDecodeWithMetadata(DeepSpeechStream stream, uint aNumResults)
public unsafe Metadata IntermediateDecodeWithMetadata(MozillaVoiceSttStream stream, uint aNumResults)
{
return NativeImp.DS_IntermediateDecodeWithMetadata(stream.GetNativePointer(), aNumResults).PtrToMetadata();
return NativeImp.STT_IntermediateDecodeWithMetadata(stream.GetNativePointer(), aNumResults).PtrToMetadata();
}
/// <summary>
@ -206,18 +206,18 @@ namespace DeepSpeechClient
/// </summary>
public unsafe string Version()
{
return NativeImp.DS_Version().PtrToString();
return NativeImp.STT_Version().PtrToString();
}
/// <summary>
/// Creates a new streaming inference state.
/// </summary>
public unsafe DeepSpeechStream CreateStream()
public unsafe MozillaVoiceSttStream CreateStream()
{
IntPtr** streamingStatePointer = null;
var resultCode = NativeImp.DS_CreateStream(_modelStatePP, ref streamingStatePointer);
var resultCode = NativeImp.STT_CreateStream(_modelStatePP, ref streamingStatePointer);
EvaluateResultCode(resultCode);
return new DeepSpeechStream(streamingStatePointer);
return new MozillaVoiceSttStream(streamingStatePointer);
}
/// <summary>
@ -225,25 +225,25 @@ namespace DeepSpeechClient
/// This can be used if you no longer need the result of an ongoing streaming
/// inference and don't want to perform a costly decode operation.
/// </summary>
public unsafe void FreeStream(DeepSpeechStream stream)
public unsafe void FreeStream(MozillaVoiceSttStream stream)
{
NativeImp.DS_FreeStream(stream.GetNativePointer());
NativeImp.STT_FreeStream(stream.GetNativePointer());
stream.Dispose();
}
/// <summary>
/// Use the DeepSpeech model to perform Speech-To-Text.
/// Use the Mozilla Voice STT model to perform Speech-To-Text.
/// </summary>
/// <param name="aBuffer">A 16-bit, mono raw audio signal at the appropriate sample rate (matching what the model was trained on).</param>
/// <param name="aBufferSize">The number of samples in the audio signal.</param>
/// <returns>The STT result. Returns NULL on error.</returns>
public unsafe string SpeechToText(short[] aBuffer, uint aBufferSize)
{
return NativeImp.DS_SpeechToText(_modelStatePP, aBuffer, aBufferSize).PtrToString();
return NativeImp.STT_SpeechToText(_modelStatePP, aBuffer, aBufferSize).PtrToString();
}
/// <summary>
/// Use the DeepSpeech model to perform Speech-To-Text, return results including metadata.
/// Use the Mozilla Voice STT model to perform Speech-To-Text, return results including metadata.
/// </summary>
/// <param name="aBuffer">A 16-bit, mono raw audio signal at the appropriate sample rate (matching what the model was trained on).</param>
/// <param name="aBufferSize">The number of samples in the audio signal.</param>
@ -251,7 +251,7 @@ namespace DeepSpeechClient
/// <returns>The extended metadata. Returns NULL on error.</returns>
public unsafe Metadata SpeechToTextWithMetadata(short[] aBuffer, uint aBufferSize, uint aNumResults)
{
return NativeImp.DS_SpeechToTextWithMetadata(_modelStatePP, aBuffer, aBufferSize, aNumResults).PtrToMetadata();
return NativeImp.STT_SpeechToTextWithMetadata(_modelStatePP, aBuffer, aBufferSize, aNumResults).PtrToMetadata();
}
#endregion
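For orientation, a minimal sketch of how the streaming API above might be driven after this rename. The model path and audio buffer are placeholders, and the namespaces are assumed to follow the renames in this commit; it is illustrative only, not code from the repository.

using System;
using MozillaVoiceSttClient;
using MozillaVoiceSttClient.Models;

class StreamingSketch
{
    static void Main()
    {
        // Placeholder model path; any acoustic model exported as .pbmm.
        using (var stt = new MozillaVoiceSttModel("output_graph.pbmm"))
        {
            MozillaVoiceSttStream stream = stt.CreateStream();

            // Normally filled from a capture device or a decoded WAV file; silence here.
            short[] buffer = new short[stt.GetModelSampleRate()];
            stt.FeedAudioContent(stream, buffer, (uint)buffer.Length);

            Console.WriteLine(stt.IntermediateDecode(stream)); // partial hypothesis
            Console.WriteLine(stt.FinishStream(stream));       // final transcript
        }
    }
}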


@ -0,0 +1,102 @@
using MozillaVoiceSttClient.Enums;
using System;
using System.Runtime.InteropServices;
namespace MozillaVoiceSttClient
{
/// <summary>
/// Wrapper for the native implementation of "libmozilla_voice_stt.so"
/// </summary>
internal static class NativeImp
{
#region Native Implementation
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl,
CharSet = CharSet.Ansi, SetLastError = true)]
internal static extern IntPtr STT_Version();
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern ErrorCodes STT_CreateModel(string aModelPath,
ref IntPtr** pint);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern IntPtr STT_ErrorCodeToErrorMessage(int aErrorCode);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern uint STT_GetModelBeamWidth(IntPtr** aCtx);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern ErrorCodes STT_SetModelBeamWidth(IntPtr** aCtx,
uint aBeamWidth);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern ErrorCodes STT_CreateModel(string aModelPath,
uint aBeamWidth,
ref IntPtr** pint);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern int STT_GetModelSampleRate(IntPtr** aCtx);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern ErrorCodes STT_EnableExternalScorer(IntPtr** aCtx,
string aScorerPath);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern ErrorCodes STT_DisableExternalScorer(IntPtr** aCtx);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern ErrorCodes STT_SetScorerAlphaBeta(IntPtr** aCtx,
float aAlpha,
float aBeta);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl,
CharSet = CharSet.Ansi, SetLastError = true)]
internal static unsafe extern IntPtr STT_SpeechToText(IntPtr** aCtx,
short[] aBuffer,
uint aBufferSize);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl, SetLastError = true)]
internal static unsafe extern IntPtr STT_SpeechToTextWithMetadata(IntPtr** aCtx,
short[] aBuffer,
uint aBufferSize,
uint aNumResults);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern void STT_FreeModel(IntPtr** aCtx);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern ErrorCodes STT_CreateStream(IntPtr** aCtx,
ref IntPtr** retval);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern void STT_FreeStream(IntPtr** aSctx);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern void STT_FreeMetadata(IntPtr metadata);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern void STT_FreeString(IntPtr str);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl,
CharSet = CharSet.Ansi, SetLastError = true)]
internal static unsafe extern void STT_FeedAudioContent(IntPtr** aSctx,
short[] aBuffer,
uint aBufferSize);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern IntPtr STT_IntermediateDecode(IntPtr** aSctx);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern IntPtr STT_IntermediateDecodeWithMetadata(IntPtr** aSctx,
uint aNumResults);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl,
CharSet = CharSet.Ansi, SetLastError = true)]
internal static unsafe extern IntPtr STT_FinishStream(IntPtr** aSctx);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern IntPtr STT_FinishStreamWithMetadata(IntPtr** aSctx,
uint aNumResults);
#endregion
}
}
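These imports are internal and are only reached through the managed wrapper shown earlier. As a rough sketch of the calling pattern (ErrorCodes and the PtrToString() helper are the project's own types referenced above, and an unsafe context is required), a model load/free cycle looks roughly like this; it is an illustration, not part of the diff:

// Sketch only: how the wrapper is expected to drive the P/Invoke declarations above.
// Runs inside an unsafe method; ErrorCodes and PtrToString() come from the project itself.
IntPtr** modelState = null;
ErrorCodes code = NativeImp.STT_CreateModel("output_graph.pbmm", ref modelState);
if (code != ErrorCodes.STT_ERR_OK)
{
    // Convert the native error code into a readable message, as the wrapper does.
    throw new ArgumentException(
        NativeImp.STT_ErrorCodeToErrorMessage((int)code).PtrToString());
}

int sampleRate = NativeImp.STT_GetModelSampleRate(modelState);   // e.g. 16000
NativeImp.STT_FreeModel(modelState);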


@ -1,7 +1,7 @@
using System;
using System.Runtime.InteropServices;
namespace DeepSpeechClient.Structs
namespace MozillaVoiceSttClient.Structs
{
[StructLayout(LayoutKind.Sequential)]
internal unsafe struct CandidateTranscript


@ -1,7 +1,7 @@
using System;
using System.Runtime.InteropServices;
namespace DeepSpeechClient.Structs
namespace MozillaVoiceSttClient.Structs
{
[StructLayout(LayoutKind.Sequential)]
internal unsafe struct Metadata


@ -1,7 +1,7 @@
using System;
using System.Runtime.InteropServices;
namespace DeepSpeechClient.Structs
namespace MozillaVoiceSttClient.Structs
{
[StructLayout(LayoutKind.Sequential)]
internal unsafe struct TokenMetadata


@ -6,8 +6,8 @@
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
<ProjectGuid>{312965E5-C4F6-4D95-BA64-79906B8BC7AC}</ProjectGuid>
<OutputType>Exe</OutputType>
<RootNamespace>DeepSpeechConsole</RootNamespace>
<AssemblyName>DeepSpeechConsole</AssemblyName>
<RootNamespace>MozillaVoiceSttConsole</RootNamespace>
<AssemblyName>MozillaVoiceSttConsole</AssemblyName>
<TargetFrameworkVersion>v4.6.2</TargetFrameworkVersion>
<FileAlignment>512</FileAlignment>
<AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects>
@ -56,9 +56,9 @@
<None Include="packages.config" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\DeepSpeechClient\DeepSpeechClient.csproj">
<ProjectReference Include="..\MozillaVoiceSttClient\MozillaVoiceSttClient.csproj">
<Project>{56DE4091-BBBE-47E4-852D-7268B33B971F}</Project>
<Name>DeepSpeechClient</Name>
<Name>MozillaVoiceSttClient</Name>
</ProjectReference>
</ItemGroup>
<ItemGroup>


@ -1,6 +1,6 @@
using DeepSpeechClient;
using DeepSpeechClient.Interfaces;
using DeepSpeechClient.Models;
using MozillaVoiceSttClient;
using MozillaVoiceSttClient.Interfaces;
using MozillaVoiceSttClient.Models;
using NAudio.Wave;
using System;
using System.Collections.Generic;
@ -52,7 +52,7 @@ namespace CSharpExamples
Console.WriteLine("Loading model...");
stopwatch.Start();
// sphinx-doc: csharp_ref_model_start
using (IDeepSpeech sttClient = new DeepSpeech(model ?? "output_graph.pbmm"))
using (IMozillaVoiceSttModel sttClient = new MozillaVoiceSttModel(model ?? "output_graph.pbmm"))
{
// sphinx-doc: csharp_ref_model_stop
stopwatch.Stop();
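The rest of this using block falls outside the hunk; a plausible continuation, assuming the interface exposes the same methods as the client class earlier in this diff (the scorer path and the audio buffer are placeholders), would be:

// Hypothetical continuation, not part of this hunk.
sttClient.EnableExternalScorer("kenlm.scorer");               // placeholder scorer path

short[] samples = new short[sttClient.GetModelSampleRate()];  // normally decoded from a 16 kHz mono WAV
string transcript = sttClient.SpeechToText(samples, (uint)samples.Length);
Console.WriteLine(transcript);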


@ -5,7 +5,7 @@ using System.Runtime.InteropServices;
// General Information about an assembly is controlled through the following
// set of attributes. Change these attribute values to modify the information
// associated with an assembly.
[assembly: AssemblyTitle("DeepSpeechConsole")]
[assembly: AssemblyTitle("MozillaVoiceSttConsole")]
[assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("")]


@ -1,8 +1,8 @@
<Application
x:Class="DeepSpeechWPF.App"
x:Class="MozillaVoiceSttWPF.App"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="clr-namespace:DeepSpeechWPF"
xmlns:local="clr-namespace:MozillaVoiceSttWPF"
StartupUri="MainWindow.xaml">
<Application.Resources />
</Application>


@ -1,10 +1,10 @@
using CommonServiceLocator;
using DeepSpeech.WPF.ViewModels;
using DeepSpeechClient.Interfaces;
using MozillaVoiceStt.WPF.ViewModels;
using MozillaVoiceSttClient.Interfaces;
using GalaSoft.MvvmLight.Ioc;
using System.Windows;
namespace DeepSpeechWPF
namespace MozillaVoiceSttWPF
{
/// <summary>
/// Interaction logic for App.xaml
@ -18,11 +18,11 @@ namespace DeepSpeechWPF
try
{
//Register instance of DeepSpeech
DeepSpeechClient.DeepSpeech deepSpeechClient =
new DeepSpeechClient.DeepSpeech("deepspeech-0.8.0-models.pbmm");
//Register instance of Mozilla Voice STT
MozillaVoiceSttClient.MozillaVoiceSttModel client =
new MozillaVoiceSttClient.MozillaVoiceSttModel("deepspeech-0.8.0-models.pbmm");
SimpleIoc.Default.Register<IDeepSpeech>(() => deepSpeechClient);
SimpleIoc.Default.Register<IMozillaVoiceSttModel>(() => client);
SimpleIoc.Default.Register<MainWindowViewModel>();
}
catch (System.Exception ex)
@ -35,8 +35,8 @@ namespace DeepSpeechWPF
protected override void OnExit(ExitEventArgs e)
{
base.OnExit(e);
//Dispose instance of DeepSpeech
ServiceLocator.Current.GetInstance<IDeepSpeech>()?.Dispose();
//Dispose instance of Mozilla Voice STT
ServiceLocator.Current.GetInstance<IMozillaVoiceSttModel>()?.Dispose();
}
}
}
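With these registrations in place, SimpleIoc injects the registered model into consumers it constructs; a short sketch of how the main view model is then resolved (the same ServiceLocator pattern already used in OnExit above):

// Sketch: SimpleIoc constructs MainWindowViewModel and injects the registered
// IMozillaVoiceSttModel into its constructor (see the view model later in this diff).
var viewModel = ServiceLocator.Current.GetInstance<MainWindowViewModel>();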


@ -1,10 +1,10 @@
<Window
x:Class="DeepSpeechWPF.MainWindow"
x:Class="MozillaVoiceSttWPF.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
Title="Deepspeech client"
Title="Mozilla Voice STT Client"
Width="800"
Height="600"
Loaded="Window_Loaded"


@ -1,8 +1,8 @@
using CommonServiceLocator;
using DeepSpeech.WPF.ViewModels;
using MozillaVoiceStt.WPF.ViewModels;
using System.Windows;
namespace DeepSpeechWPF
namespace MozillaVoiceSttWPF
{
/// <summary>
/// Interaction logic for MainWindow.xaml


@ -6,8 +6,8 @@
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
<ProjectGuid>{54BFD766-4305-4F4C-BA59-AF45505DF3C1}</ProjectGuid>
<OutputType>WinExe</OutputType>
<RootNamespace>DeepSpeech.WPF</RootNamespace>
<AssemblyName>DeepSpeech.WPF</AssemblyName>
<RootNamespace>MozillaVoiceStt.WPF</RootNamespace>
<AssemblyName>MozillaVoiceStt.WPF</AssemblyName>
<TargetFrameworkVersion>v4.6.2</TargetFrameworkVersion>
<FileAlignment>512</FileAlignment>
<ProjectTypeGuids>{60dc8134-eba5-43b8-bcc9-bb4bc16c2548};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
@ -131,9 +131,9 @@
<None Include="App.config" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\DeepSpeechClient\DeepSpeechClient.csproj">
<ProjectReference Include="..\MozillaVoiceSttClient\MozillaVoiceSttClient.csproj">
<Project>{56de4091-bbbe-47e4-852d-7268b33b971f}</Project>
<Name>DeepSpeechClient</Name>
<Name>MozillaVoiceSttClient</Name>
</ProjectReference>
</ItemGroup>
<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />


@ -3,9 +3,9 @@ Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 15
VisualStudioVersion = 15.0.28307.421
MinimumVisualStudioVersion = 10.0.40219.1
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "DeepSpeech.WPF", "DeepSpeech.WPF.csproj", "{54BFD766-4305-4F4C-BA59-AF45505DF3C1}"
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MozillaVoiceStt.WPF", "MozillaVoiceStt.WPF.csproj", "{54BFD766-4305-4F4C-BA59-AF45505DF3C1}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "DeepSpeechClient", "..\DeepSpeechClient\DeepSpeechClient.csproj", "{56DE4091-BBBE-47E4-852D-7268B33B971F}"
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MozillaVoiceSttClient", "..\MozillaVoiceSttClient\MozillaVoiceSttClient.csproj", "{56DE4091-BBBE-47E4-852D-7268B33B971F}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution


@ -7,11 +7,11 @@ using System.Windows;
// General Information about an assembly is controlled through the following
// set of attributes. Change these attribute values to modify the information
// associated with an assembly.
[assembly: AssemblyTitle("DeepSpeech.WPF")]
[assembly: AssemblyTitle("MozillaVoiceStt.WPF")]
[assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("")]
[assembly: AssemblyProduct("DeepSpeech.WPF.SingleFiles")]
[assembly: AssemblyProduct("MozillaVoiceStt.WPF.SingleFiles")]
[assembly: AssemblyCopyright("Copyright © 2018")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]


@ -8,7 +8,7 @@
// </auto-generated>
//------------------------------------------------------------------------------
namespace DeepSpeech.WPF.Properties {
namespace MozillaVoiceStt.WPF.Properties {
using System;
@ -39,7 +39,7 @@ namespace DeepSpeech.WPF.Properties {
internal static global::System.Resources.ResourceManager ResourceManager {
get {
if (object.ReferenceEquals(resourceMan, null)) {
global::System.Resources.ResourceManager temp = new global::System.Resources.ResourceManager("DeepSpeech.WPF.Properties.Resources", typeof(Resources).Assembly);
global::System.Resources.ResourceManager temp = new global::System.Resources.ResourceManager("MozillaVoiceStt.WPF.Properties.Resources", typeof(Resources).Assembly);
resourceMan = temp;
}
return resourceMan;


@ -8,7 +8,7 @@
// </auto-generated>
//------------------------------------------------------------------------------
namespace DeepSpeech.WPF.Properties {
namespace MozillaVoiceStt.WPF.Properties {
[global::System.Runtime.CompilerServices.CompilerGeneratedAttribute()]


@ -3,7 +3,7 @@ using System.Collections.Generic;
using System.ComponentModel;
using System.Runtime.CompilerServices;
namespace DeepSpeech.WPF.ViewModels
namespace MozillaVoiceStt.WPF.ViewModels
{
/// <summary>
/// Implementation of <see cref="INotifyPropertyChanged"/> to simplify models.


@ -3,8 +3,8 @@ using CSCore;
using CSCore.CoreAudioAPI;
using CSCore.SoundIn;
using CSCore.Streams;
using DeepSpeechClient.Interfaces;
using DeepSpeechClient.Models;
using MozillaVoiceSttClient.Interfaces;
using MozillaVoiceSttClient.Models;
using GalaSoft.MvvmLight.CommandWpf;
using Microsoft.Win32;
using System;
@ -15,7 +15,7 @@ using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace DeepSpeech.WPF.ViewModels
namespace MozillaVoiceStt.WPF.ViewModels
{
/// <summary>
/// View model of the MainWindow View.
@ -27,7 +27,7 @@ namespace DeepSpeech.WPF.ViewModels
private const string ScorerPath = "kenlm.scorer";
#endregion
private readonly IDeepSpeech _sttClient;
private readonly IMozillaVoiceSttModel _sttClient;
#region Commands
/// <summary>
@ -62,7 +62,7 @@ namespace DeepSpeech.WPF.ViewModels
/// <summary>
/// Stream used to feed data into the acoustic model.
/// </summary>
private DeepSpeechStream _sttStream;
private MozillaVoiceSttStream _sttStream;
/// <summary>
/// Records the audio of the selected device.
@ -75,7 +75,7 @@ namespace DeepSpeech.WPF.ViewModels
private SoundInSource _soundInSource;
/// <summary>
/// Target wave source.(16KHz Mono 16bit for DeepSpeech)
/// Target wave source (16 kHz mono 16-bit for Mozilla Voice STT).
/// </summary>
private IWaveSource _convertedSource;
@ -200,7 +200,7 @@ namespace DeepSpeech.WPF.ViewModels
#endregion
#region Ctors
public MainWindowViewModel(IDeepSpeech sttClient)
public MainWindowViewModel(IMozillaVoiceSttModel sttClient)
{
_sttClient = sttClient;
@ -290,7 +290,8 @@ namespace DeepSpeech.WPF.ViewModels
//read data from the convertedSource
//important: don't use the e.Data here
//the e.Data contains the raw data provided by the
//soundInSource which won't have the deepspeech required audio format
//soundInSource which won't have the Mozilla Voice STT required
// audio format
byte[] buffer = new byte[_convertedSource.WaveFormat.BytesPerSecond / 2];
int read;
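FeedAudioContent expects 16-bit samples as a short[], while the converted source is read into a byte buffer, so the bytes have to be reinterpreted before being fed into the stream. A minimal sketch of that read loop, reusing the names above (the actual loop body is outside this hunk):

// Sketch: read 16 kHz mono 16-bit audio from the converted source and feed the stream.
while ((read = _convertedSource.Read(buffer, 0, buffer.Length)) > 0)
{
    short[] samples = new short[read / 2];
    Buffer.BlockCopy(buffer, 0, samples, 0, read);   // reinterpret the little-endian bytes as shorts
    _sttClient.FeedAudioContent(_sttStream, samples, (uint)samples.Length);
}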


@ -1,8 +1,8 @@
Building DeepSpeech native client for Windows
Building Mozilla Voice STT native client for Windows
=====================================================
Now we can build the native client of DeepSpeech and run inference on Windows using the C# client, to do that we need to compile the ``native_client``.
Now we can build the native client of Mozilla Voice STT and run inference on Windows using the C# client. To do that, we need to compile the ``native_client``.
**Table of Contents**
@ -59,8 +59,8 @@ There should already be a symbolic link, for this example let's suppose that we
.
├── D:\
│ ├── cloned # Contains DeepSpeech and tensorflow side by side
│ │ └── DeepSpeech # Root of the cloned DeepSpeech
│ ├── cloned # Contains Mozilla Voice STT and tensorflow side by side
│ │ └── DeepSpeech # Root of the cloned Mozilla Voice STT
│ │ ├── tensorflow # Root of the cloned Mozilla's tensorflow
└── ...
@ -126,7 +126,7 @@ We will add AVX/AVX2 support in the command, please make sure that your CPU supp
.. code-block:: bash
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --copt=/arch:AVX --copt=/arch:AVX2 //native_client:libdeepspeech.so
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --copt=/arch:AVX --copt=/arch:AVX2 //native_client:libmozilla_voice_stt.so
GPU with CUDA
~~~~~~~~~~~~~
@ -135,11 +135,11 @@ If you enabled CUDA in `configure.py <https://github.com/mozilla/tensorflow/blob
.. code-block:: bash
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --config=cuda --copt=/arch:AVX --copt=/arch:AVX2 //native_client:libdeepspeech.so
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --config=cuda --copt=/arch:AVX --copt=/arch:AVX2 //native_client:libmozilla_voice_stt.so
Be patient, if you enabled AVX/AVX2 and CUDA it will take a long time. Finally you should see it stops and shows the path to the generated ``libdeepspeech.so``.
Be patient; if you enabled AVX/AVX2 and CUDA it will take a long time. Finally you should see it stop and show the path to the generated ``libmozilla_voice_stt.so``.
Using the generated library
---------------------------
As for now we can only use the generated ``libdeepspeech.so`` with the C# clients, go to `native_client/dotnet/ <https://github.com/mozilla/DeepSpeech/tree/master/native_client/dotnet>`_ in your DeepSpeech directory and open the Visual Studio solution, then we need to build in debug or release mode, finally we just need to copy ``libdeepspeech.so`` to the generated ``x64/Debug`` or ``x64/Release`` directory.
For now the generated ``libmozilla_voice_stt.so`` can only be used with the C# clients. Go to `native_client/dotnet/ <https://github.com/mozilla/DeepSpeech/tree/master/native_client/dotnet>`_ in your Mozilla Voice STT directory and open the Visual Studio solution, then build in Debug or Release mode. Finally, copy ``libmozilla_voice_stt.so`` to the generated ``x64/Debug`` or ``x64/Release`` directory.
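A quick smoke test after copying the library is to load a model through the C# client and print the library version. A minimal sketch, with a placeholder model path:

.. code-block:: csharp

   // libmozilla_voice_stt.so must sit next to the client binaries (x64/Debug or x64/Release)
   // so the P/Invoke layer can resolve it at runtime.
   using (var model = new MozillaVoiceSttModel("output_graph.pbmm"))
   {
       Console.WriteLine(model.Version());
   }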


@ -3,13 +3,13 @@
<metadata>
<id>$NUPKG_ID</id>
<version>$NUPKG_VERSION</version>
<title>DeepSpeech</title>
<title>Mozilla.Voice.STT</title>
<authors>Mozilla</authors>
<owners>Mozilla</owners>
<license type="expression">MPL-2.0</license>
<projectUrl>http://github.com/mozilla/DeepSpeech</projectUrl>
<requireLicenseAcceptance>false</requireLicenseAcceptance>
<description>A library for running inference with a DeepSpeech model</description>
<description>A library for running inference with a Mozilla Voice STT model</description>
<copyright>Copyright (c) 2019 Mozilla Corporation</copyright>
<tags>native speech speech_recognition</tags>
</metadata>


@ -11,7 +11,7 @@ using namespace std;
#include "ctcdecode/decoder_utils.h"
#include "ctcdecode/scorer.h"
#include "alphabet.h"
#include "deepspeech.h"
#include "mozilla_voice_stt.h"
namespace po = boost::program_options;
@ -66,9 +66,9 @@ create_package(absl::optional<string> alphabet_path,
scorer.set_utf8_mode(force_utf8.value());
scorer.reset_params(default_alpha, default_beta);
int err = scorer.load_lm(lm_path);
if (err != DS_ERR_SCORER_NO_TRIE) {
if (err != STT_ERR_SCORER_NO_TRIE) {
cerr << "Error loading language model file: "
<< DS_ErrorCodeToErrorMessage(err) << "\n";
<< STT_ErrorCodeToErrorMessage(err) << "\n";
return 1;
}
scorer.fill_dictionary(words);


@ -2,7 +2,7 @@
include ../definitions.mk
ARCHS := $(shell grep 'ABI_FILTERS' libdeepspeech/gradle.properties | cut -d'=' -f2 | sed -e 's/;/ /g')
ARCHS := $(shell grep 'ABI_FILTERS' libmozillavoicestt/gradle.properties | cut -d'=' -f2 | sed -e 's/;/ /g')
GRADLE ?= ./gradlew
all: apk
@ -14,13 +14,13 @@ apk-clean:
$(GRADLE) clean
libs-clean:
rm -fr libdeepspeech/libs/*/libdeepspeech.so
rm -fr libmozillavoicestt/libs/*/libmozilla_voice_stt.so
libdeepspeech/libs/%/libdeepspeech.so:
-mkdir libdeepspeech/libs/$*/
cp ${TFDIR}/bazel-out/$*-*/bin/native_client/libdeepspeech.so libdeepspeech/libs/$*/
libmozillavoicestt/libs/%/libmozilla_voice_stt.so:
-mkdir libmozillavoicestt/libs/$*/
cp ${TFDIR}/bazel-out/$*-*/bin/native_client/libmozilla_voice_stt.so libmozillavoicestt/libs/$*/
apk: apk-clean bindings $(patsubst %,libdeepspeech/libs/%/libdeepspeech.so,$(ARCHS))
apk: apk-clean bindings $(patsubst %,libmozillavoicestt/libs/%/libmozilla_voice_stt.so,$(ARCHS))
$(GRADLE) build
maven-bundle: apk
@ -28,4 +28,4 @@ maven-bundle: apk
$(GRADLE) zipMavenArtifacts
bindings: clean ds-swig
$(DS_SWIG_ENV) swig -c++ -java -package org.mozilla.deepspeech.libdeepspeech -outdir libdeepspeech/src/main/java/org/mozilla/deepspeech/libdeepspeech/ -o jni/deepspeech_wrap.cpp jni/deepspeech.i
$(DS_SWIG_ENV) swig -c++ -java -package org.mozilla.voice.stt -outdir libmozillavoicestt/src/main/java/org/mozilla/voice/stt/ -o jni/deepspeech_wrap.cpp jni/deepspeech.i


@ -4,7 +4,7 @@ android {
compileSdkVersion 27
defaultConfig {
applicationId "org.mozilla.deepspeech"
applicationId "org.mozilla.voice.sttapp"
minSdkVersion 21
targetSdkVersion 27
versionName androidGitVersion.name()
@ -28,7 +28,7 @@ android {
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation project(':libdeepspeech')
implementation project(':libmozillavoicestt')
implementation 'com.android.support:appcompat-v7:27.1.1'
implementation 'com.android.support.constraint:constraint-layout:1.1.3'
testImplementation 'junit:junit:4.12'


@ -1,4 +1,4 @@
package org.mozilla.deepspeech;
package org.mozilla.voice.sttapp;
import android.content.Context;
import android.support.test.InstrumentationRegistry;
@ -21,6 +21,6 @@ public class ExampleInstrumentedTest {
// Context of the app under test.
Context appContext = InstrumentationRegistry.getTargetContext();
assertEquals("org.mozilla.deepspeech", appContext.getPackageName());
assertEquals("org.mozilla.voice.sttapp", appContext.getPackageName());
}
}


@ -1,6 +1,6 @@
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="org.mozilla.deepspeech">
package="org.mozilla.voice.sttapp">
<application
android:allowBackup="true"
@ -9,7 +9,7 @@
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".DeepSpeechActivity">
<activity android:name=".MozillaVoiceSttActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />


@ -1,4 +1,4 @@
package org.mozilla.deepspeech;
package org.mozilla.voice.sttapp;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
@ -16,11 +16,11 @@ import java.io.IOException;
import java.nio.ByteOrder;
import java.nio.ByteBuffer;
import org.mozilla.deepspeech.libdeepspeech.DeepSpeechModel;
import org.mozilla.voice.stt.MozillaVoiceSttModel;
public class DeepSpeechActivity extends AppCompatActivity {
public class MozillaVoiceSttActivity extends AppCompatActivity {
DeepSpeechModel _m = null;
MozillaVoiceSttModel _m = null;
EditText _tfliteModel;
EditText _audioFile;
@ -50,7 +50,7 @@ public class DeepSpeechActivity extends AppCompatActivity {
this._tfliteStatus.setText("Creating model");
if (this._m == null) {
// sphinx-doc: java_ref_model_start
this._m = new DeepSpeechModel(tfliteModel);
this._m = new MozillaVoiceSttModel(tfliteModel);
this._m.setBeamWidth(BEAM_WIDTH);
// sphinx-doc: java_ref_model_stop
}


@ -4,7 +4,7 @@
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".DeepSpeechActivity">
tools:context=".MozillaVoiceSttActivity">
<!--
<TextView


@ -1,3 +1,3 @@
<resources>
<string name="app_name">DeepSpeech</string>
<string name="app_name">Mozilla Voice STT</string>
</resources>


@ -1,4 +1,4 @@
package org.mozilla.deepspeech.libdeepspeech;
package org.mozilla.voice.sttapp;
import org.junit.Test;


@ -2,7 +2,7 @@
%{
#define SWIG_FILE_WITH_INIT
#include "../../deepspeech.h"
#include "../../mozilla_voice_stt.h"
%}
%include "typemaps.i"
@ -10,7 +10,7 @@
%javaconst(1);
%include "arrays_java.i"
// apply to DS_FeedAudioContent and DS_SpeechToText
// apply to STT_FeedAudioContent and STT_SpeechToText
%apply short[] { short* };
%include "cpointer.i"
@ -43,7 +43,7 @@
}
~Metadata() {
DS_FreeMetadata(self);
STT_FreeMetadata(self);
}
}
@ -54,13 +54,13 @@
%nodefaultctor TokenMetadata;
%nodefaultdtor TokenMetadata;
%typemap(newfree) char* "DS_FreeString($1);";
%newobject DS_SpeechToText;
%newobject DS_IntermediateDecode;
%newobject DS_FinishStream;
%newobject DS_ErrorCodeToErrorMessage;
%typemap(newfree) char* "STT_FreeString($1);";
%newobject STT_SpeechToText;
%newobject STT_IntermediateDecode;
%newobject STT_FinishStream;
%newobject STT_ErrorCodeToErrorMessage;
%rename ("%(strip:[DS_])s") "";
%rename ("%(strip:[STT_])s") "";
// make struct members camel case to suit Java conventions
%rename ("%(camelcase)s", %$ismember) "";
@ -71,4 +71,4 @@
%ignore "Metadata::transcripts";
%ignore "CandidateTranscript::tokens";
%include "../deepspeech.h"
%include "../mozilla_voice_stt.h"


@ -1,13 +0,0 @@
package org.mozilla.deepspeech.libdeepspeech;
public final class DeepSpeechStreamingState {
private SWIGTYPE_p_StreamingState _sp;
public DeepSpeechStreamingState(SWIGTYPE_p_StreamingState sp) {
this._sp = sp;
}
public SWIGTYPE_p_StreamingState get() {
return this._sp;
}
}


@ -1,3 +0,0 @@
<resources>
<string name="app_name">libdeepspeech</string>
</resources>


@ -26,12 +26,12 @@ add_library( deepspeech-lib
set_target_properties( deepspeech-lib
PROPERTIES
IMPORTED_LOCATION
${CMAKE_SOURCE_DIR}/libs/${ANDROID_ABI}/libdeepspeech.so )
${CMAKE_SOURCE_DIR}/libs/${ANDROID_ABI}/libmozilla_voice_stt.so )
add_custom_command( TARGET deepspeech-jni POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy
${CMAKE_SOURCE_DIR}/libs/${ANDROID_ABI}/libdeepspeech.so
${CMAKE_LIBRARY_OUTPUT_DIRECTORY}/libdeepspeech.so )
${CMAKE_SOURCE_DIR}/libs/${ANDROID_ABI}/libmozilla_voice_stt.so
${CMAKE_LIBRARY_OUTPUT_DIRECTORY}/libmozilla_voice_stt.so )
# Searches for a specified prebuilt library and stores the path as a


@ -44,9 +44,9 @@ android {
installOptions "-d","-t"
}
// Avoid scanning libdeepspeech_doc
// Avoid scanning stt_doc
sourceSets {
main.java.srcDirs = [ 'src/main/java/org/mozilla/deepspeech/libdeepspeech/' ]
main.java.srcDirs = [ 'src/main/java/org/mozilla/voice/stt/' ]
}
}
@ -76,9 +76,9 @@ uploadArchives {
repositories {
mavenDeployer {
pom.packaging = 'aar'
pom.name = 'libdeepspeech'
pom.groupId = 'org.mozilla.deepspeech'
pom.artifactId = 'libdeepspeech'
pom.name = 'stt'
pom.groupId = 'org.mozilla.voice'
pom.artifactId = 'stt'
pom.version = dsVersionString + (project.hasProperty('snapshot') ? '-SNAPSHOT' : '')
pom.project {
@ -95,8 +95,8 @@ uploadArchives {
developers {
developer {
id 'deepspeech'
name 'Mozilla DeepSpeech Team'
id 'mozillavoicestt'
name 'Mozilla Voice STT Team'
email 'deepspeechs@lists.mozilla.org'
}
}


@ -1,4 +1,4 @@
package org.mozilla.deepspeech.libdeepspeech.test;
package org.mozilla.voice.stt.test;
import android.content.Context;
import android.support.test.InstrumentationRegistry;
@ -11,8 +11,8 @@ import org.junit.runners.MethodSorters;
import static org.junit.Assert.*;
import org.mozilla.deepspeech.libdeepspeech.DeepSpeechModel;
import org.mozilla.deepspeech.libdeepspeech.CandidateTranscript;
import org.mozilla.voice.stt.MozillaVoiceSttModel;
import org.mozilla.voice.stt.CandidateTranscript;
import java.io.RandomAccessFile;
import java.io.FileNotFoundException;
@ -52,12 +52,12 @@ public class BasicTest {
// Context of the app under test.
Context appContext = InstrumentationRegistry.getTargetContext();
assertEquals("org.mozilla.deepspeech.libdeepspeech.test", appContext.getPackageName());
assertEquals("org.mozilla.voice.stt.test", appContext.getPackageName());
}
@Test
public void loadDeepSpeech_basic() {
DeepSpeechModel m = new DeepSpeechModel(modelFile);
MozillaVoiceSttModel m = new MozillaVoiceSttModel(modelFile);
m.freeModel();
}
@ -69,7 +69,7 @@ public class BasicTest {
return retval;
}
private String doSTT(DeepSpeechModel m, boolean extendedMetadata) {
private String doSTT(MozillaVoiceSttModel m, boolean extendedMetadata) {
try {
RandomAccessFile wave = new RandomAccessFile(wavFile, "r");
@ -114,7 +114,7 @@ public class BasicTest {
@Test
public void loadDeepSpeech_stt_noLM() {
DeepSpeechModel m = new DeepSpeechModel(modelFile);
MozillaVoiceSttModel m = new MozillaVoiceSttModel(modelFile);
String decoded = doSTT(m, false);
assertEquals("she had your dark suit in greasy wash water all year", decoded);
@ -123,7 +123,7 @@ public class BasicTest {
@Test
public void loadDeepSpeech_stt_withLM() {
DeepSpeechModel m = new DeepSpeechModel(modelFile);
MozillaVoiceSttModel m = new MozillaVoiceSttModel(modelFile);
m.enableExternalScorer(scorerFile);
String decoded = doSTT(m, false);
@ -133,7 +133,7 @@ public class BasicTest {
@Test
public void loadDeepSpeech_sttWithMetadata_noLM() {
DeepSpeechModel m = new DeepSpeechModel(modelFile);
MozillaVoiceSttModel m = new MozillaVoiceSttModel(modelFile);
String decoded = doSTT(m, true);
assertEquals("she had your dark suit in greasy wash water all year", decoded);
@ -142,7 +142,7 @@ public class BasicTest {
@Test
public void loadDeepSpeech_sttWithMetadata_withLM() {
DeepSpeechModel m = new DeepSpeechModel(modelFile);
MozillaVoiceSttModel m = new MozillaVoiceSttModel(modelFile);
m.enableExternalScorer(scorerFile);
String decoded = doSTT(m, true);


@ -1,2 +1,2 @@
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="org.mozilla.deepspeech.libdeepspeech" />
package="org.mozilla.voice.stt" />


@ -1,13 +1,13 @@
package org.mozilla.deepspeech.libdeepspeech;
package org.mozilla.voice.stt;
/**
* @brief Exposes a DeepSpeech model in Java
* @brief Exposes a Mozilla Voice STT model in Java
**/
public class DeepSpeechModel {
public class MozillaVoiceSttModel {
static {
System.loadLibrary("deepspeech-jni");
System.loadLibrary("deepspeech");
System.loadLibrary("mozilla_voice_stt");
}
// FIXME: We should have something better than those SWIGTYPE_*
@ -15,14 +15,14 @@ public class DeepSpeechModel {
private SWIGTYPE_p_ModelState _msp;
private void evaluateErrorCode(int errorCode) {
DeepSpeech_Error_Codes code = DeepSpeech_Error_Codes.swigToEnum(errorCode);
if (code != DeepSpeech_Error_Codes.ERR_OK) {
Error_Codes code = Error_Codes.swigToEnum(errorCode);
if (code != Error_Codes.ERR_OK) {
throw new RuntimeException("Error: " + impl.ErrorCodeToErrorMessage(errorCode) + " (0x" + Integer.toHexString(errorCode) + ").");
}
}
/**
* @brief An object providing an interface to a trained DeepSpeech model.
* @brief An object providing an interface to a trained Mozilla Voice STT model.
*
* @constructor
*
@ -30,7 +30,7 @@ public class DeepSpeechModel {
*
* @throws RuntimeException on failure.
*/
public DeepSpeechModel(String modelPath) {
public MozillaVoiceSttModel(String modelPath) {
this._mspp = impl.new_modelstatep();
evaluateErrorCode(impl.CreateModel(modelPath, this._mspp));
this._msp = impl.modelstatep_value(this._mspp);
@ -107,7 +107,7 @@ public class DeepSpeechModel {
}
/*
* @brief Use the DeepSpeech model to perform Speech-To-Text.
* @brief Use the Mozilla Voice STT model to perform Speech-To-Text.
*
* @param buffer A 16-bit, mono raw audio signal at the appropriate
* sample rate (matching what the model was trained on).
@ -120,7 +120,7 @@ public class DeepSpeechModel {
}
/**
* @brief Use the DeepSpeech model to perform Speech-To-Text and output metadata
* @brief Use the Mozilla Voice STT model to perform Speech-To-Text and output metadata
* about the results.
*
* @param buffer A 16-bit, mono raw audio signal at the appropriate
@ -144,10 +144,10 @@ public class DeepSpeechModel {
*
* @throws RuntimeException on failure.
*/
public DeepSpeechStreamingState createStream() {
public MozillaVoiceSttStreamingState createStream() {
SWIGTYPE_p_p_StreamingState ssp = impl.new_streamingstatep();
evaluateErrorCode(impl.CreateStream(this._msp, ssp));
return new DeepSpeechStreamingState(impl.streamingstatep_value(ssp));
return new MozillaVoiceSttStreamingState(impl.streamingstatep_value(ssp));
}
/**
@ -158,7 +158,7 @@ public class DeepSpeechModel {
* appropriate sample rate (matching what the model was trained on).
* @param buffer_size The number of samples in @p buffer.
*/
public void feedAudioContent(DeepSpeechStreamingState ctx, short[] buffer, int buffer_size) {
public void feedAudioContent(MozillaVoiceSttStreamingState ctx, short[] buffer, int buffer_size) {
impl.FeedAudioContent(ctx.get(), buffer, buffer_size);
}
@ -169,7 +169,7 @@ public class DeepSpeechModel {
*
* @return The STT intermediate result.
*/
public String intermediateDecode(DeepSpeechStreamingState ctx) {
public String intermediateDecode(MozillaVoiceSttStreamingState ctx) {
return impl.IntermediateDecode(ctx.get());
}
@ -181,7 +181,7 @@ public class DeepSpeechModel {
*
* @return The STT intermediate result.
*/
public Metadata intermediateDecodeWithMetadata(DeepSpeechStreamingState ctx, int num_results) {
public Metadata intermediateDecodeWithMetadata(MozillaVoiceSttStreamingState ctx, int num_results) {
return impl.IntermediateDecodeWithMetadata(ctx.get(), num_results);
}
@ -195,7 +195,7 @@ public class DeepSpeechModel {
*
* @note This method will free the state pointer (@p ctx).
*/
public String finishStream(DeepSpeechStreamingState ctx) {
public String finishStream(MozillaVoiceSttStreamingState ctx) {
return impl.FinishStream(ctx.get());
}
@ -212,7 +212,7 @@ public class DeepSpeechModel {
*
* @note This method will free the state pointer (@p ctx).
*/
public Metadata finishStreamWithMetadata(DeepSpeechStreamingState ctx, int num_results) {
public Metadata finishStreamWithMetadata(MozillaVoiceSttStreamingState ctx, int num_results) {
return impl.FinishStreamWithMetadata(ctx.get(), num_results);
}
}

Some files were not shown because too many files changed in this diff.