Revert "Merge branch 'rename-real'"

This reverts commit ae9fdb183e, reversing
changes made to 2eb75b6206.
Reuben Morais 2020-08-25 15:37:58 +02:00
Parent 386935e1fa
Commit ae0cf8db6a
183 changed files: 1497 additions and 1330 deletions

View file

@@ -1,5 +1,5 @@
This file contains a list of papers in chronological order that have been published
using Mozilla Voice STT.
using Mozilla's DeepSpeech.
To appear
==========

View file

@@ -149,12 +149,12 @@ RUN bazel build \
--copt=-msse4.2 \
--copt=-mavx \
--copt=-fvisibility=hidden \
//native_client:libmozilla_voice_stt.so \
//native_client:libdeepspeech.so \
--verbose_failures \
--action_env=LD_LIBRARY_PATH=${LD_LIBRARY_PATH}
# Copy built libs to /DeepSpeech/native_client
RUN cp bazel-bin/native_client/libmozilla_voice_stt.so /DeepSpeech/native_client/
RUN cp bazel-bin/native_client/libdeepspeech.so /DeepSpeech/native_client/
# Build client.cc and install Python client and decoder bindings
ENV TFDIR /DeepSpeech/tensorflow
@@ -162,7 +162,7 @@ ENV TFDIR /DeepSpeech/tensorflow
RUN nproc
WORKDIR /DeepSpeech/native_client
RUN make NUM_PROCESSES=$(nproc) mozilla_voice_stt
RUN make NUM_PROCESSES=$(nproc) deepspeech
WORKDIR /DeepSpeech
RUN cd native_client/python && make NUM_PROCESSES=$(nproc) bindings

View file

@@ -1,5 +1,5 @@
Mozilla Voice STT
=================
Project DeepSpeech
==================
.. image:: https://readthedocs.org/projects/deepspeech/badge/?version=latest
@@ -12,7 +12,7 @@ Mozilla Voice STT
:alt: Task Status
Mozilla Voice STT is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on `Baidu's Deep Speech research paper <https://arxiv.org/abs/1412.5567>`_. Mozilla Voice STT uses Google's `TensorFlow <https://www.tensorflow.org/>`_ to make the implementation easier.
DeepSpeech is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on `Baidu's Deep Speech research paper <https://arxiv.org/abs/1412.5567>`_. Project DeepSpeech uses Google's `TensorFlow <https://www.tensorflow.org/>`_ to make the implementation easier.
Documentation for installation, usage, and training models is available on `deepspeech.readthedocs.io <http://deepspeech.readthedocs.io/?badge=latest>`_.

View file

@@ -1,12 +1,12 @@
.. _build-native-client:
Building Mozilla Voice STT Binaries
===================================
Building DeepSpeech Binaries
============================
This section describes how to rebuild binaries. We already provide several prebuilt binaries for all the supported platforms;
it is highly advised to use them unless you know what you are doing.
If you'd like to build the Mozilla Voice STT binaries yourself, you'll need the following pre-requisites downloaded and installed:
If you'd like to build the DeepSpeech binaries yourself, you'll need the following pre-requisites downloaded and installed:
* `Bazel 3.1.0 <https://github.com/bazelbuild/bazel/releases/tag/3.1.0>`_
* `General TensorFlow r2.3 requirements <https://www.tensorflow.org/install/source#tested_build_configurations>`_
@@ -26,14 +26,14 @@ If you'd like to build the language bindings or the decoder package, you'll also
Dependencies
------------
If you follow these instructions, you should compile your own binaries of Mozilla Voice STT (built on TensorFlow using Bazel).
If you follow these instructions, you should compile your own binaries of DeepSpeech (built on TensorFlow using Bazel).
For more information on configuring TensorFlow, read the docs up to the end of `"Configure the Build" <https://www.tensorflow.org/install/source#configure_the_build>`_.
Checkout source code
^^^^^^^^^^^^^^^^^^^^
Clone Mozilla Voice STT source code (TensorFlow will come as a submodule):
Clone DeepSpeech source code (TensorFlow will come as a submodule):
.. code-block::
@@ -56,24 +56,24 @@ After you have installed the correct version of Bazel, configure TensorFlow:
cd tensorflow
./configure
Compile Mozilla Voice STT
-------------------------
Compile DeepSpeech
------------------
Compile ``libmozilla_voice_stt.so``
Compile ``libdeepspeech.so``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Within your TensorFlow directory, there should be a symbolic link to the Mozilla Voice STT ``native_client`` directory. If it is not present, create it with the following command:
Within your TensorFlow directory, there should be a symbolic link to the DeepSpeech ``native_client`` directory. If it is not present, create it with the following command:
.. code-block::
cd tensorflow
ln -s ../native_client
You can now use Bazel to build the main Mozilla Voice STT library, ``libmozilla_voice_stt.so``. Add ``--config=cuda`` if you want a CUDA build.
You can now use Bazel to build the main DeepSpeech library, ``libdeepspeech.so``. Add ``--config=cuda`` if you want a CUDA build.
.. code-block::
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libmozilla_voice_stt.so
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libdeepspeech.so
The generated binaries will be saved to ``bazel-bin/native_client/``.
@@ -82,12 +82,12 @@ The generated binaries will be saved to ``bazel-bin/native_client/``.
Compile ``generate_scorer_package``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Following the same setup as for ``libmozilla_voice_stt.so`` above, you can rebuild the ``generate_scorer_package`` binary by adding its target to the command line: ``//native_client:generate_scorer_package``.
Following the same setup as for ``libdeepspeech.so`` above, you can rebuild the ``generate_scorer_package`` binary by adding its target to the command line: ``//native_client:generate_scorer_package``.
Using the example from above, you can build the library and that binary at the same time:
.. code-block::
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libmozilla_voice_stt.so //native_client:generate_scorer_package
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_scorer_package
The generated binaries will be saved to ``bazel-bin/native_client/``.
@@ -99,7 +99,7 @@ Now, ``cd`` into the ``DeepSpeech/native_client`` directory and use the ``Makefi
.. code-block::
cd ../DeepSpeech/native_client
make mozilla_voice_stt
make deepspeech
Installing your own Binaries
----------------------------
@@ -121,9 +121,9 @@ Included are a set of generated Python bindings. After following the above build
cd native_client/python
make bindings
pip install dist/mozilla_voice_stt*
pip install dist/deepspeech*
The API mirrors the C++ API and is demonstrated in `client.py <python/client.py>`_. Refer to the `C API <c-usage>` for documentation.
The API mirrors the C++ API and is demonstrated in `client.py <python/client.py>`_. Refer to `deepspeech.h <deepspeech.h>`_ for documentation.
Install NodeJS / ElectronJS bindings
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -136,7 +136,7 @@ After following the above build and installation instructions, the Node.JS bindi
make build
make npm-pack
This will create the package ``mozilla_voice_stt-VERSION.tgz`` in ``native_client/javascript``.
This will create the package ``deepspeech-VERSION.tgz`` in ``native_client/javascript``.
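To test the freshly built package locally, it can be installed straight from that tarball; a minimal sketch (substitute the actual version string for ``VERSION``):

.. code-block::

   npm install native_client/javascript/deepspeech-VERSION.tgz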
Install the CTC decoder package
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -165,23 +165,23 @@ So your command line for ``RPi3`` and ``ARMv7`` should look like:
.. code-block::
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3 --config=rpi3_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libmozilla_voice_stt.so
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3 --config=rpi3_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libdeepspeech.so
And your command line for ``LePotato`` and ``ARM64`` should look like:
.. code-block::
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3-armv8 --config=rpi3-armv8_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libmozilla_voice_stt.so
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3-armv8 --config=rpi3-armv8_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libdeepspeech.so
While we test only on RPi3 Raspbian Buster and LePotato ARMBian Buster, anything compatible with ``armv7-a cortex-a53`` or ``armv8-a cortex-a53`` should be fine.
The ``mozilla_voice_stt`` binary can also be cross-built, with ``TARGET=rpi3`` or ``TARGET=rpi3-armv8``. This might require you to set up a system tree using the ``multistrap`` tool and the multistrap configuration files: ``native_client/multistrap_armbian64_buster.conf`` and ``native_client/multistrap_raspbian_buster.conf``.
The ``deepspeech`` binary can also be cross-built, with ``TARGET=rpi3`` or ``TARGET=rpi3-armv8``. This might require you to set up a system tree using the ``multistrap`` tool and the multistrap configuration files: ``native_client/multistrap_armbian64_buster.conf`` and ``native_client/multistrap_raspbian_buster.conf``.
The path of the system tree can be overridden from the default values defined in ``definitions.mk`` through the ``RASPBIAN`` ``make`` variable.
.. code-block::
cd ../DeepSpeech/native_client
make TARGET=<system> mozilla_voice_stt
make TARGET=<system> deepspeech
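For example, assuming a Raspbian system tree built with ``multistrap`` lives under ``/tmp/multistrap-raspbian-buster`` (an illustrative path), a cross-build for ``RPi3`` could look like:

.. code-block::

   cd ../DeepSpeech/native_client
   make TARGET=rpi3 RASPBIAN=/tmp/multistrap-raspbian-buster deepspeech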
Android devices support
-----------------------
@@ -193,9 +193,9 @@ Please refer to TensorFlow documentation on how to set up the environment to buil
Using the library from Android project
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We provide up-to-date and tested STT usable as an ``AAR`` package,
We provide up-to-date and tested ``libdeepspeech`` usable as an ``AAR`` package,
for Android versions from 7.0 to 11.0. The package is published on
`JCenter <https://bintray.com/alissy/org.mozilla.voice/stt>`_,
`JCenter <https://bintray.com/alissy/org.mozilla.deepspeech/libdeepspeech>`_,
and the ``JCenter`` repository should be available by default in any Android
project. Please make sure your project is set up to pull from this repository.
You can then include the library by just adding this line to your
@@ -203,43 +203,43 @@ You can then include the library by just adding this line to your
.. code-block::
implementation 'voice.mozilla.org:stt:VERSION@aar'
implementation 'deepspeech.mozilla.org:libdeepspeech:VERSION@aar'
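For reference, a minimal module-level ``build.gradle`` dependency block might look like the following (``VERSION`` is a placeholder for the release you target):

.. code-block::

   dependencies {
       // AAR package published on JCenter; replace VERSION with an actual release
       implementation 'deepspeech.mozilla.org:libdeepspeech:VERSION@aar'
   }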
Building ``libmozilla_voice_stt.so``
Building ``libdeepspeech.so``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can build ``libmozilla_voice_stt.so`` using (ARMv7):
You can build ``libdeepspeech.so`` using (ARMv7):
.. code-block::
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++14 --copt=-D_GLIBCXX_USE_C99 //native_client:libmozilla_voice_stt.so
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++14 --copt=-D_GLIBCXX_USE_C99 //native_client:libdeepspeech.so
Or (ARM64):
.. code-block::
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm64 --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++14 --copt=-D_GLIBCXX_USE_C99 //native_client:libmozilla_voice_stt.so
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm64 --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++14 --copt=-D_GLIBCXX_USE_C99 //native_client:libdeepspeech.so
Building ``libmozillavoicestt.aar``
Building ``libdeepspeech.aar``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In the unlikely event you have to rebuild the JNI bindings, source code is
available under the ``libmozillavoicestt`` subdirectory. Building depends on the shared
object: please make sure to place ``libmozilla_voice_stt.so`` into the
``libmozillavoicestt/libs/{arm64-v8a,armeabi-v7a,x86_64}/`` matching subdirectories.
available under the ``libdeepspeech`` subdirectory. Building depends on the shared
object: please make sure to place ``libdeepspeech.so`` into the
``libdeepspeech/libs/{arm64-v8a,armeabi-v7a,x86_64}/`` matching subdirectories.
Building the bindings is managed by ``gradle`` and should be limited to issuing
``./gradlew libmozillavoicestt:build``, producing an ``AAR`` package in
``./libmozillavoicestt/build/outputs/aar/``.
``./gradlew libdeepspeech:build``, producing an ``AAR`` package in
``./libdeepspeech/build/outputs/aar/``.
Please note that you might have to copy the file to a local Maven repository
and adapt the file naming (when missing, the error message should state what
filename it expects and where).
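As a purely hypothetical sketch of that manual step (the local repository layout and file names are assumptions; follow what the error message actually asks for):

.. code-block::

   # Group 'deepspeech.mozilla.org' maps to this directory layout in a local Maven repo
   mkdir -p ~/.m2/repository/deepspeech/mozilla/org/libdeepspeech/VERSION/
   cp libdeepspeech/build/outputs/aar/libdeepspeech-release.aar \
      ~/.m2/repository/deepspeech/mozilla/org/libdeepspeech/VERSION/libdeepspeech-VERSION.aar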
Building C++ ``mozilla_voice_stt`` binary
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Building C++ ``deepspeech`` binary
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Building the ``mozilla_voice_stt`` binary will happen through ``ndk-build`` (ARMv7):
Building the ``deepspeech`` binary will happen through ``ndk-build`` (ARMv7):
.. code-block::
@@ -272,13 +272,13 @@ demo of one usage of the application. For example, it's only able to read PCM
mono 16kHz 16-bit files and it might fail on some WAVE files that do not
exactly follow the specification.
Running ``mozilla_voice_stt`` via adb
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Running ``deepspeech`` via adb
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You should use ``adb push`` to send data to the device; please refer to the Android
documentation on how to use it.
Please push Mozilla Voice STT data to ``/sdcard/mozilla_voice_stt/``\ , including:
Please push DeepSpeech data to ``/sdcard/deepspeech/``\ , including:
* ``output_graph.tflite`` which is the TF Lite model
@@ -286,18 +286,18 @@ Please push Mozilla Voice STT data to ``/sdcard/mozilla_voice_stt/``\ , includin
the scorer; please be aware that a scorer that is too big will make the device run
out of memory
Then, push binaries from ``native_client.tar.xz`` to ``/data/local/tmp/stt``\ :
Then, push binaries from ``native_client.tar.xz`` to ``/data/local/tmp/ds``\ :
* ``mozilla_voice_stt``
* ``libmozilla_voice_stt.so``
* ``deepspeech``
* ``libdeepspeech.so``
* ``libc++_shared.so``
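Assuming the model files are in your current directory and ``native_client.tar.xz`` has been extracted next to them, the transfer could be sketched as follows (file names are illustrative):

.. code-block::

   adb shell mkdir -p /sdcard/deepspeech/
   adb push output_graph.tflite kenlm.scorer /sdcard/deepspeech/
   adb push deepspeech libdeepspeech.so libc++_shared.so /data/local/tmp/ds/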
You should then be able to run as usual, using a shell from ``adb shell``\ :
.. code-block::
user@device$ cd /data/local/tmp/stt/
user@device$ LD_LIBRARY_PATH=$(pwd)/ ./mozilla_voice_stt [...]
user@device$ cd /data/local/tmp/ds/
user@device$ LD_LIBRARY_PATH=$(pwd)/ ./deepspeech [...]
Please note that the Android linker does not support ``rpath``, so you have to set
``LD_LIBRARY_PATH``. Properly wrapped / packaged bindings do embed the library

View file

@@ -10,59 +10,56 @@ C API
See also the list of error codes including descriptions for each error in :ref:`error-codes`.
.. doxygenfunction:: STT_CreateModel
.. doxygenfunction:: DS_CreateModel
:project: deepspeech-c
.. doxygenfunction:: STT_FreeModel
.. doxygenfunction:: DS_FreeModel
:project: deepspeech-c
.. doxygenfunction:: STT_EnableExternalScorer
.. doxygenfunction:: DS_EnableExternalScorer
:project: deepspeech-c
.. doxygenfunction:: STT_DisableExternalScorer
.. doxygenfunction:: DS_DisableExternalScorer
:project: deepspeech-c
.. doxygenfunction:: STT_SetScorerAlphaBeta
.. doxygenfunction:: DS_SetScorerAlphaBeta
:project: deepspeech-c
.. doxygenfunction:: STT_GetModelSampleRate
.. doxygenfunction:: DS_GetModelSampleRate
:project: deepspeech-c
.. doxygenfunction:: STT_SpeechToText
.. doxygenfunction:: DS_SpeechToText
:project: deepspeech-c
.. doxygenfunction:: STT_SpeechToTextWithMetadata
.. doxygenfunction:: DS_SpeechToTextWithMetadata
:project: deepspeech-c
.. doxygenfunction:: STT_CreateStream
.. doxygenfunction:: DS_CreateStream
:project: deepspeech-c
.. doxygenfunction:: STT_FeedAudioContent
.. doxygenfunction:: DS_FeedAudioContent
:project: deepspeech-c
.. doxygenfunction:: STT_IntermediateDecode
.. doxygenfunction:: DS_IntermediateDecode
:project: deepspeech-c
.. doxygenfunction:: STT_IntermediateDecodeWithMetadata
.. doxygenfunction:: DS_IntermediateDecodeWithMetadata
:project: deepspeech-c
.. doxygenfunction:: STT_FinishStream
.. doxygenfunction:: DS_FinishStream
:project: deepspeech-c
.. doxygenfunction:: STT_FinishStreamWithMetadata
.. doxygenfunction:: DS_FinishStreamWithMetadata
:project: deepspeech-c
.. doxygenfunction:: STT_FreeStream
.. doxygenfunction:: DS_FreeStream
:project: deepspeech-c
.. doxygenfunction:: STT_FreeMetadata
.. doxygenfunction:: DS_FreeMetadata
:project: deepspeech-c
.. doxygenfunction:: STT_FreeString
.. doxygenfunction:: DS_FreeString
:project: deepspeech-c
.. doxygenfunction:: STT_Version
:project: deepspeech-c
.. doxygenfunction:: STT_ErrorCodeToErrorMessage
.. doxygenfunction:: DS_Version
:project: deepspeech-c

View file

@@ -6,7 +6,7 @@ CTC beam search decoder
Introduction
^^^^^^^^^^^^
Mozilla Voice STT uses the `Connectionist Temporal Classification <http://www.cs.toronto.edu/~graves/icml_2006.pdf>`_ loss function. For an excellent explanation of CTC and its usage, see this Distill article: `Sequence Modeling with CTC <https://distill.pub/2017/ctc/>`_. This document assumes the reader is familiar with the concepts described in that article, and describes Mozilla Voice STT specific behaviors that developers building systems with Mozilla Voice STT should know to avoid problems.
DeepSpeech uses the `Connectionist Temporal Classification <http://www.cs.toronto.edu/~graves/icml_2006.pdf>`_ loss function. For an excellent explanation of CTC and its usage, see this Distill article: `Sequence Modeling with CTC <https://distill.pub/2017/ctc/>`_. This document assumes the reader is familiar with the concepts described in that article, and describes DeepSpeech specific behaviors that developers building systems with DeepSpeech should know to avoid problems.
Note: Documentation for the tooling for creating custom scorer packages is available in :ref:`scorer-scripts`.
@@ -16,19 +16,19 @@ The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "S
External scorer
^^^^^^^^^^^^^^^
Mozilla Voice STT clients support OPTIONAL use of an external language model to improve the accuracy of the predicted transcripts. In the code, command line parameters, and documentation, this is referred to as a "scorer". The scorer is used to compute the likelihood (also called a score, hence the name "scorer") of sequences of words or characters in the output, to guide the decoder towards more likely results. This improves accuracy significantly.
DeepSpeech clients support OPTIONAL use of an external language model to improve the accuracy of the predicted transcripts. In the code, command line parameters, and documentation, this is referred to as a "scorer". The scorer is used to compute the likelihood (also called a score, hence the name "scorer") of sequences of words or characters in the output, to guide the decoder towards more likely results. This improves accuracy significantly.
The use of an external scorer is fully optional. When an external scorer is not specified, Mozilla Voice STT still uses a beam search decoding algorithm, but without any outside scoring.
The use of an external scorer is fully optional. When an external scorer is not specified, DeepSpeech still uses a beam search decoding algorithm, but without any outside scoring.
Currently, the Mozilla Voice STT external scorer is implemented with `KenLM <https://kheafield.com/code/kenlm/>`_, plus some tooling to package the necessary files and metadata into a single ``.scorer`` package. The tooling lives in ``data/lm/``. The scripts included in ``data/lm/`` can be used and modified to build your own language model based on your particular use case or language. See :ref:`scorer-scripts` for more details on how to reproduce our scorer file as well as create your own.
Currently, the DeepSpeech external scorer is implemented with `KenLM <https://kheafield.com/code/kenlm/>`_, plus some tooling to package the necessary files and metadata into a single ``.scorer`` package. The tooling lives in ``data/lm/``. The scripts included in ``data/lm/`` can be used and modified to build your own language model based on your particular use case or language. See :ref:`scorer-scripts` for more details on how to reproduce our scorer file as well as create your own.
The scripts are geared towards replicating the language model files we release as part of `Mozilla Voice STT model releases <https://github.com/mozilla/DeepSpeech/releases/latest>`_, but modifying them to use different datasets or language model construction parameters should be simple.
The scripts are geared towards replicating the language model files we release as part of `DeepSpeech model releases <https://github.com/mozilla/DeepSpeech/releases/latest>`_, but modifying them to use different datasets or language model construction parameters should be simple.
Decoding modes
^^^^^^^^^^^^^^
Mozilla Voice STT currently supports two modes of operation with significant differences at both training and decoding time. Note that Bytes output mode is experimental and has not been tested for languages other than Chinese Mandarin.
DeepSpeech currently supports two modes of operation with significant differences at both training and decoding time. Note that Bytes output mode is experimental and has not been tested for languages other than Chinese Mandarin.
Default mode (alphabet based)

View file

@@ -1,5 +1,11 @@
Mozilla Voice STT Acoustic Model
================================
DeepSpeech Model
================
The aim of this project is to create a simple, open, and ubiquitous speech
recognition engine. Simple, in that the engine should not require server-class
hardware to execute. Open, in that the code and models are released under the
Mozilla Public License. Ubiquitous, in that the engine should run on many
platforms and have bindings to many different languages.
The architecture of the engine was originally motivated by that presented in
`Deep Speech: Scaling up end-to-end speech recognition <http://arxiv.org/abs/1412.5567>`_.
@@ -71,7 +77,7 @@ with respect to all of the model parameters may be done via back-propagation
through the rest of the network. We use the Adam method for training
`[3] <http://arxiv.org/abs/1412.6980>`_.
The complete LSTM model is illustrated in the figure below.
The complete RNN model is illustrated in the figure below.
.. image:: ../images/rnn_fig-624x598.png
:alt: Mozilla Voice STT LSTM
:alt: DeepSpeech BRNN

View file

@@ -2,17 +2,17 @@
==============
MozillaVoiceSttModel Class
--------------------------
DeepSpeech Class
----------------
.. doxygenclass:: MozillaVoiceSttClient::MozillaVoiceSttModel
.. doxygenclass:: DeepSpeechClient::DeepSpeech
:project: deepspeech-dotnet
:members:
MozillaVoiceSttStream Class
---------------------------
DeepSpeechStream Class
----------------------
.. doxygenclass:: MozillaVoiceSttClient::Models::MozillaVoiceSttStream
.. doxygenclass:: DeepSpeechClient::Models::DeepSpeechStream
:project: deepspeech-dotnet
:members:
@@ -21,33 +21,33 @@ ErrorCodes
See also the main definition including descriptions for each error in :ref:`error-codes`.
.. doxygenenum:: MozillaVoiceSttClient::Enums::ErrorCodes
.. doxygenenum:: DeepSpeechClient::Enums::ErrorCodes
:project: deepspeech-dotnet
Metadata
--------
.. doxygenclass:: MozillaVoiceSttClient::Models::Metadata
.. doxygenclass:: DeepSpeechClient::Models::Metadata
:project: deepspeech-dotnet
:members: Transcripts
CandidateTranscript
-------------------
.. doxygenclass:: MozillaVoiceSttClient::Models::CandidateTranscript
.. doxygenclass:: DeepSpeechClient::Models::CandidateTranscript
:project: deepspeech-dotnet
:members: Tokens, Confidence
TokenMetadata
-------------
.. doxygenclass:: MozillaVoiceSttClient::Models::TokenMetadata
.. doxygenclass:: DeepSpeechClient::Models::TokenMetadata
:project: deepspeech-dotnet
:members: Text, Timestep, StartTime
IMozillaVoiceSttModel Interface
-------------------------------
DeepSpeech Interface
--------------------
.. doxygeninterface:: MozillaVoiceSttClient::Interfaces::IMozillaVoiceSttModel
.. doxygeninterface:: DeepSpeechClient::Interfaces::IDeepSpeech
:project: deepspeech-dotnet
:members:

View file

@@ -1,12 +1,12 @@
.NET API Usage example
======================
Examples are from `native_client/dotnet/MozillaVoiceSttConsole/Program.cs`.
Examples are from `native_client/dotnet/DeepSpeechConsole/Program.cs`.
Creating a model instance and loading model
-------------------------------------------
.. literalinclude:: ../native_client/dotnet/MozillaVoiceSttConsole/Program.cs
.. literalinclude:: ../native_client/dotnet/DeepSpeechConsole/Program.cs
:language: csharp
:linenos:
:lineno-match:
@@ -16,7 +16,7 @@ Creating a model instance and loading model
Performing inference
--------------------
.. literalinclude:: ../native_client/dotnet/MozillaVoiceSttConsole/Program.cs
.. literalinclude:: ../native_client/dotnet/DeepSpeechConsole/Program.cs
:language: csharp
:linenos:
:lineno-match:
@@ -26,4 +26,4 @@ Performing inference
Full source code
----------------
See :download:`Full source code<../native_client/dotnet/MozillaVoiceSttConsole/Program.cs>`.
See :download:`Full source code<../native_client/dotnet/DeepSpeechConsole/Program.cs>`.

View file

@@ -5,7 +5,7 @@ Error codes
Below is the definition for all error codes used in the API, their numerical values, and a human-readable description.
.. literalinclude:: ../native_client/mozilla_voice_stt.h
.. literalinclude:: ../native_client/deepspeech.h
:language: c
:start-after: sphinx-doc: error_code_listing_start
:end-before: sphinx-doc: error_code_listing_end

View file

@@ -1,29 +1,29 @@
Java
====
MozillaVoiceSttModel
--------------------
DeepSpeechModel
---------------
.. doxygenclass:: org::mozilla::voice::stt::MozillaVoiceSttModel
.. doxygenclass:: org::mozilla::deepspeech::libdeepspeech::DeepSpeechModel
:project: deepspeech-java
:members:
Metadata
--------
.. doxygenclass:: org::mozilla::voice::stt::Metadata
.. doxygenclass:: org::mozilla::deepspeech::libdeepspeech::Metadata
:project: deepspeech-java
:members: getNumTranscripts, getTranscript
CandidateTranscript
-------------------
.. doxygenclass:: org::mozilla::voice::stt::CandidateTranscript
.. doxygenclass:: org::mozilla::deepspeech::libdeepspeech::CandidateTranscript
:project: deepspeech-java
:members: getNumTokens, getConfidence, getToken
TokenMetadata
-------------
.. doxygenclass:: org::mozilla::voice::stt::TokenMetadata
.. doxygenclass:: org::mozilla::deepspeech::libdeepspeech::TokenMetadata
:project: deepspeech-java
:members: getText, getTimestep, getStartTime

View file

@@ -1,12 +1,12 @@
Java API Usage example
======================
Examples are from `native_client/java/app/src/main/java/org/mozilla/voice/sttapp/MozillaVoiceSttActivity.java`.
Examples are from `native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java`.
Creating a model instance and loading model
-------------------------------------------
.. literalinclude:: ../native_client/java/app/src/main/java/org/mozilla/voice/sttapp/MozillaVoiceSttActivity.java
.. literalinclude:: ../native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java
:language: java
:linenos:
:lineno-match:
@@ -16,7 +16,7 @@ Creating a model instance and loading model
Performing inference
--------------------
.. literalinclude:: ../native_client/java/app/src/main/java/org/mozilla/voice/sttapp/MozillaVoiceSttActivity.java
.. literalinclude:: ../native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java
:language: java
:linenos:
:lineno-match:
@@ -26,4 +26,4 @@ Performing inference
Full source code
----------------
See :download:`Full source code<../native_client/java/app/src/main/java/org/mozilla/voice/sttapp/MozillaVoiceSttActivity.java>`.
See :download:`Full source code<../native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java>`.

View file

@@ -4,7 +4,7 @@
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
SPHINXPROJ = Mozilla Voice STT
SPHINXPROJ = DeepSpeech
SOURCEDIR = .
BUILDDIR = .build

View file

@@ -1,8 +1,8 @@
Parallel Optimization
=====================
This is how we implement optimization of the Mozilla Voice STT model across GPUs
on a single host. Parallel optimization can take on various forms. For example,
This is how we implement optimization of the DeepSpeech model across GPUs on a
single host. Parallel optimization can take on various forms. For example,
one can use asynchronous updates of the model, synchronous updates of the model,
or some combination of the two.

View file

@@ -9,61 +9,61 @@ Linux / AMD64 without GPU
^^^^^^^^^^^^^^^^^^^^^^^^^
* x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference)
* Ubuntu 14.04+ (glibc >= 2.19, libstdc++6 >= 4.8)
* Full TensorFlow runtime (``mozilla_voice_stt`` packages)
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
* Full TensorFlow runtime (``deepspeech`` packages)
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
Linux / AMD64 with GPU
^^^^^^^^^^^^^^^^^^^^^^
* x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference)
* Ubuntu 14.04+ (glibc >= 2.19, libstdc++6 >= 4.8)
* CUDA 10.0 (and capable GPU)
* Full TensorFlow runtime (``mozilla_voice_stt`` packages)
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
* Full TensorFlow runtime (``deepspeech`` packages)
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
Linux / ARMv7
^^^^^^^^^^^^^
* Cortex-A53 compatible ARMv7 SoC with Neon support
* Raspbian Buster-compatible distribution
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
Linux / Aarch64
^^^^^^^^^^^^^^^
* Cortex-A72 compatible Aarch64 SoC
* ARMbian Buster-compatible distribution
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
Android / ARMv7
^^^^^^^^^^^^^^^
* ARMv7 SoC with Neon support
* Android 7.0-10.0
* NDK API level >= 21
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
Android / Aarch64
^^^^^^^^^^^^^^^^^
* Aarch64 SoC
* Android 7.0-10.0
* NDK API level >= 21
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
macOS / AMD64
^^^^^^^^^^^^^
* x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference)
* macOS >= 10.10
* Full TensorFlow runtime (``mozilla_voice_stt`` packages)
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
* Full TensorFlow runtime (``deepspeech`` packages)
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
Windows / AMD64 without GPU
^^^^^^^^^^^^^^^^^^^^^^^^^^^
* x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference)
* Windows Server >= 2012 R2 ; Windows >= 8.1
* Full TensorFlow runtime (``mozilla_voice_stt`` packages)
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
* Full TensorFlow runtime (``deepspeech`` packages)
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)
Windows / AMD64 with GPU
^^^^^^^^^^^^^^^^^^^^^^^^
* x86-64 CPU with AVX/FMA (one can rebuild without AVX/FMA, but it might slow down inference)
* Windows Server >= 2012 R2 ; Windows >= 8.1
* CUDA 10.0 (and capable GPU)
* Full TensorFlow runtime (``mozilla_voice_stt`` packages)
* TensorFlow Lite runtime (``mozilla_voice_stt_tflite`` packages)
* Full TensorFlow runtime (``deepspeech`` packages)
* TensorFlow Lite runtime (``deepspeech-tflite`` packages)

View file

@@ -3,7 +3,7 @@
External scorer scripts
=======================
Mozilla Voice STT pre-trained models include an external scorer. This document explains how to reproduce our external scorer, as well as adapt the scripts to create your own.
DeepSpeech pre-trained models include an external scorer. This document explains how to reproduce our external scorer, as well as adapt the scripts to create your own.
The scorer is composed of two sub-components, a KenLM language model and a trie data structure containing all words in the vocabulary. In order to create the scorer package, first we must create a KenLM language model (using ``data/lm/generate_lm.py``), and then use ``generate_scorer_package`` to create the final package file including the trie data structure.
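As a sketch of those two steps, assuming an input text corpus in ``vocabulary.txt`` and a KenLM build under ``/path/to/kenlm/build/bin/`` (flag names reflect the scripts at the time of writing and may differ between versions):

.. code-block:: bash

   python3 data/lm/generate_lm.py --input_txt vocabulary.txt --output_dir lm/ \
       --top_k 500000 --kenlm_bins /path/to/kenlm/build/bin/ \
       --arpa_order 5 --max_arpa_memory "85%" --arpa_prune "0|0|1" \
       --binary_a_bits 255 --binary_q_bits 8 --binary_type trie
   ./generate_scorer_package --alphabet alphabet.txt --lm lm/lm.binary \
       --vocab lm/vocab-500000.txt --package kenlm.scorer \
       --default_alpha 0.93 --default_beta 1.18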
@@ -59,6 +59,6 @@ Building your own scorer can be useful if you're using models in a narrow usage
The LibriSpeech LM training text used by our scorer is around 4GB uncompressed, which should give an idea of the size of a corpus needed for a reasonable language model for general speech recognition. For more constrained use cases with smaller vocabularies, you don't need as much data, but you should still try to gather as much as you can.
With a text corpus in hand, you can then re-use ``generate_lm.py`` and ``generate_scorer_package`` to create your own scorer that is compatible with Mozilla Voice STT clients and language bindings. Before building the language model, you must first familiarize yourself with the `KenLM toolkit <https://kheafield.com/code/kenlm/>`_. Most of the options exposed by the ``generate_lm.py`` script are simply forwarded to KenLM options of the same name, so you must read the KenLM documentation in order to fully understand their behavior.
With a text corpus in hand, you can then re-use ``generate_lm.py`` and ``generate_scorer_package`` to create your own scorer that is compatible with DeepSpeech clients and language bindings. Before building the language model, you must first familiarize yourself with the `KenLM toolkit <https://kheafield.com/code/kenlm/>`_. Most of the options exposed by the ``generate_lm.py`` script are simply forwarded to KenLM options of the same name, so you must read the KenLM documentation in order to fully understand their behavior.
After using ``generate_lm.py`` to create a KenLM language model binary file, you can use ``generate_scorer_package`` to create a scorer package as described in the previous section. Note that we have a :github:`lm_optimizer.py script <lm_optimizer.py>` which can be used to find good default values for alpha and beta. To use it, you must first generate a package with any value set for default alpha and beta flags. For this step, it doesn't matter what values you use, as they'll be overridden by ``lm_optimizer.py`` later. Then, use ``lm_optimizer.py`` with this scorer file to find good alpha and beta values. Finally, use ``generate_scorer_package`` again, this time with the new values.
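A sketch of that search step (the flag names here are assumptions; check ``lm_optimizer.py --help`` for your version):

.. code-block:: bash

   # Evaluate candidate alpha/beta values against a held-out data set
   python3 lm_optimizer.py --test_files dev.csv --checkpoint_dir checkpoints/ \
       --scorer_path kenlm.scorer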

View file

@@ -12,7 +12,7 @@ Prerequisites for training a model
Getting the training code
^^^^^^^^^^^^^^^^^^^^^^^^^
Clone the Mozilla Voice STT repository:
Clone the DeepSpeech repository:
.. code-block:: bash
@@ -21,25 +21,25 @@ Clone the Mozilla Voice STT repository:
Creating a virtual environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you create a virtual environment, you create a directory containing a ``python3`` binary and everything needed to run Mozilla Voice STT. You can use whatever directory you want. For the purposes of this documentation, we will rely on ``$HOME/tmp/stt-train-venv``. You can create it using this command:
When you create a virtual environment, you create a directory containing a ``python3`` binary and everything needed to run DeepSpeech. You can use whatever directory you want. For the purposes of this documentation, we will rely on ``$HOME/tmp/deepspeech-train-venv``. You can create it using this command:
.. code-block::
$ python3 -m venv $HOME/tmp/stt-train-venv/
$ python3 -m venv $HOME/tmp/deepspeech-train-venv/
Once this command completes successfully, the environment will be ready to be activated.
Activating the environment
^^^^^^^^^^^^^^^^^^^^^^^^^^
Each time you need to work with Mozilla Voice STT, you have to *activate* this virtual environment. This is done with this simple command:
Each time you need to work with DeepSpeech, you have to *activate* this virtual environment. This is done with this simple command:
.. code-block::
$ source $HOME/tmp/stt-train-venv/bin/activate
$ source $HOME/tmp/deepspeech-train-venv/bin/activate
Installing Mozilla Voice STT Training Code and its dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Installing DeepSpeech Training Code and its dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Install the required dependencies using ``pip3``\ :
@@ -88,7 +88,7 @@ This should ensure that you'll re-use the upstream Python 3 TensorFlow GPU-enabl
make Dockerfile.train
If you want to specify a different Mozilla Voice STT repository / branch, you can pass ``DEEPSPEECH_REPO`` or ``DEEPSPEECH_SHA`` parameters:
If you want to specify a different DeepSpeech repository / branch, you can pass ``DEEPSPEECH_REPO`` or ``DEEPSPEECH_SHA`` parameters:
.. code-block:: bash
@@ -105,7 +105,7 @@ After extraction of such a data set, you'll find the following contents:
* the ``*.tsv`` files output by CorporaCreator for the downloaded language
* the mp3 audio files they reference in a ``clips`` sub-directory.
To bring this data into a form that Mozilla Voice STT understands, you have to run the CommonVoice v2.0 importer (\ ``bin/import_cv2.py``\ ):
To bring this data into a form that DeepSpeech understands, you have to run the CommonVoice v2.0 importer (\ ``bin/import_cv2.py``\ ):
.. code-block:: bash
@@ -150,7 +150,7 @@ For executing pre-configured training scenarios, there is a collection of conven
**If you experience GPU OOM errors while training, try reducing the batch size with the ``--train_batch_size``\ , ``--dev_batch_size`` and ``--test_batch_size`` parameters.**
As a simple first example you can open a terminal, change to the directory of the Mozilla Voice STT checkout, activate the virtualenv created above, and run:
As a simple first example you can open a terminal, change to the directory of the DeepSpeech checkout, activate the virtualenv created above, and run:
.. code-block:: bash
@@ -160,7 +160,7 @@ This script will train on a small sample dataset composed of just a single audio
Also feel free to pass additional (or overriding) ``DeepSpeech.py`` parameters to these scripts. Then, just run the script to train the modified network.
Each dataset has a corresponding importer script in ``bin/`` that can be used to download (if it's freely available) and preprocess the dataset. See ``bin/import_librivox.py`` for an example of how to import and preprocess a large dataset for training with Mozilla Voice STT.
Each dataset has a corresponding importer script in ``bin/`` that can be used to download (if it's freely available) and preprocess the dataset. See ``bin/import_librivox.py`` for an example of how to import and preprocess a large dataset for training with DeepSpeech.
Some importers might require additional code to properly handle your locale-specific requirements. Such handling is dealt with via the ``--validate_label_locale`` flag, which allows you to source an out-of-tree Python script that defines a ``validate_label`` function. Please refer to ``util/importers.py`` for an implementation example of that function.
If you don't provide this argument, the default ``validate_label`` function will be used. This one is only intended for the English language, so you might have consistency issues in your data for other languages.
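As an illustration, a minimal out-of-tree validator might look like the following Python sketch (the file name and cleaning rules are hypothetical; see ``util/importers.py`` for the reference behavior):

.. code-block:: python

   # validate_label_custom.py -- hypothetical locale-specific validator
   def validate_label(label):
       """Return the cleaned transcript, or None to reject the sample."""
       label = label.lower().strip()
       # Reject transcripts containing digits our alphabet cannot encode
       if any(ch.isdigit() for ch in label):
           return None
       return label

It would then be passed to the importer via ``--validate_label_locale path/to/validate_label_custom.py``.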
@@ -187,7 +187,7 @@ Mixed precision training makes use of both FP32 and FP16 precisions where approp
python3 DeepSpeech.py --train_files ./train.csv --dev_files ./dev.csv --test_files ./test.csv --automatic_mixed_precision
```
On a Volta generation V100 GPU, automatic mixed precision speeds up Mozilla Voice STT training and evaluation by ~30%-40%.
On a Volta generation V100 GPU, automatic mixed precision speeds up DeepSpeech training and evaluation by ~30%-40%.
Checkpointing
^^^^^^^^^^^^^
@@ -229,9 +229,9 @@ Upon a successful run, it should report the conversion of a non-zero number of n
Continuing training from a release model
----------------------------------------
There are currently two supported approaches to make use of a pre-trained Mozilla Voice STT model: fine-tuning or transfer-learning. Choosing which one to use is a simple decision, and it depends on your target dataset. Does your data use the same alphabet as the release model? If "Yes": fine-tune. If "No": use transfer-learning.
There are currently two supported approaches to make use of a pre-trained DeepSpeech model: fine-tuning or transfer-learning. Choosing which one to use is a simple decision, and it depends on your target dataset. Does your data use the same alphabet as the release model? If "Yes": fine-tune. If "No": use transfer-learning.
If your own data uses the *exact* same alphabet as the English release model (i.e. `a-z` plus `'`) then the release model's output layer will match your data, and you can just fine-tune the existing parameters. However, if you want to use a new alphabet (e.g. Cyrillic `а`, `б`, `д`), the output layer of a release Mozilla Voice STT model will *not* match your data. In this case, you should use transfer-learning (i.e. remove the trained model's output layer, and reinitialize a new output layer that matches your target character set).
If your own data uses the *exact* same alphabet as the English release model (i.e. `a-z` plus `'`) then the release model's output layer will match your data, and you can just fine-tune the existing parameters. However, if you want to use a new alphabet (e.g. Cyrillic `а`, `б`, `д`), the output layer of a release DeepSpeech model will *not* match your data. In this case, you should use transfer-learning (i.e. remove the trained model's output layer, and reinitialize a new output layer that matches your target character set).
N.B. - If you have access to a pre-trained model which uses UTF-8 bytes at the output layer you can always fine-tune, because any alphabet should be encodable as UTF-8.
@@ -263,11 +263,11 @@ If you try to load a release model without following these steps, you'll get an
Transfer-Learning (new alphabet)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to continue training an alphabet-based Mozilla Voice STT model (i.e. not a UTF-8 model) on a new language, or if you just want to add new characters to your custom alphabet, you will probably want to use transfer-learning instead of fine-tuning. If you're starting with a pre-trained UTF-8 model -- even if your data comes from a different language or uses a different alphabet -- the model will be able to predict your new transcripts, and you should use fine-tuning instead.
If you want to continue training an alphabet-based DeepSpeech model (i.e. not a UTF-8 model) on a new language, or if you just want to add new characters to your custom alphabet, you will probably want to use transfer-learning instead of fine-tuning. If you're starting with a pre-trained UTF-8 model -- even if your data comes from a different language or uses a different alphabet -- the model will be able to predict your new transcripts, and you should use fine-tuning instead.
In a nutshell, Mozilla Voice STT's transfer-learning allows you to remove certain layers from a pre-trained model, initialize new layers for your target data, stitch together the old and new layers, and update all layers via gradient descent. You will remove the pre-trained output layer (and optionally more layers) and reinitialize parameters to fit your target alphabet. The simplest case of transfer-learning is when you remove just the output layer.
In a nutshell, DeepSpeech's transfer-learning allows you to remove certain layers from a pre-trained model, initialize new layers for your target data, stitch together the old and new layers, and update all layers via gradient descent. You will remove the pre-trained output layer (and optionally more layers) and reinitialize parameters to fit your target alphabet. The simplest case of transfer-learning is when you remove just the output layer.
In Mozilla Voice STT's implementation of transfer-learning, all removed layers will be contiguous, starting from the output layer. The key flag you will want to experiment with is ``--drop_source_layers``. This flag accepts an integer from ``1`` to ``5`` and allows you to specify how many layers you want to remove from the pre-trained model. For example, if you supplied ``--drop_source_layers 3``, you would drop the last three layers of the pre-trained model: the output layer, penultimate layer, and LSTM layer. All dropped layers will be reinitialized, and (crucially) the output layer will be defined to match your supplied target alphabet.
In DeepSpeech's implementation of transfer-learning, all removed layers will be contiguous, starting from the output layer. The key flag you will want to experiment with is ``--drop_source_layers``. This flag accepts an integer from ``1`` to ``5`` and allows you to specify how many layers you want to remove from the pre-trained model. For example, if you supplied ``--drop_source_layers 3``, you would drop the last three layers of the pre-trained model: the output layer, penultimate layer, and LSTM layer. All dropped layers will be reinitialized, and (crucially) the output layer will be defined to match your supplied target alphabet.
You need to specify the location of the pre-trained model with ``--load_checkpoint_dir`` and define where your new model checkpoints will be saved with ``--save_checkpoint_dir``. You need to specify how many layers to remove (aka "drop") from the pre-trained model: ``--drop_source_layers``. You also need to supply your new alphabet file using the standard ``--alphabet_config_path`` (remember, using a new alphabet is the whole reason you want to use transfer-learning).
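Putting those flags together, a transfer-learning run could be sketched as follows (paths, file names, and the number of dropped layers are illustrative assumptions):

.. code-block:: bash

   python3 DeepSpeech.py \
       --drop_source_layers 1 \
       --alphabet_config_path my-alphabet.txt \
       --load_checkpoint_dir path/to/release/checkpoint \
       --save_checkpoint_dir path/to/new/checkpoint \
       --train_files my-train.csv --dev_files my-dev.csv --test_files my-test.csv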
@@ -285,7 +285,8 @@ You need to specify the location of the pre-trained model with ``--load_checkpoi
UTF-8 mode
^^^^^^^^^^
Mozilla Voice STT includes a UTF-8 operating mode which can be useful to model languages with very large alphabets, such as Chinese Mandarin. For details on how it works and how to use it, see :ref:`decoder-docs`.
DeepSpeech includes a UTF-8 operating mode which can be useful to model languages with very large alphabets, such as Chinese Mandarin. For details on how it works and how to use it, see :ref:`decoder-docs`.
.. _training-data-augmentation:

View file

@@ -3,7 +3,7 @@
Using a Pre-trained Model
=========================
Inference using a Mozilla Voice STT pre-trained model can be done with a client/language binding package. We have four clients/language bindings in this repository, listed below, and also a few community-maintained clients/language bindings in other repositories, listed `further down in this README <#third-party-bindings>`_.
Inference using a DeepSpeech pre-trained model can be done with a client/language binding package. We have four clients/language bindings in this repository, listed below, and also a few community-maintained clients/language bindings in other repositories, listed `further down in this README <#third-party-bindings>`_.
* :ref:`The C API <c-usage>`.
* :ref:`The Python package/language binding <py-usage>`
@@ -13,7 +13,7 @@ Inference using a Mozilla Voice STT pre-trained model can be done with a client/
.. _runtime-deps:
Running ``mozilla_voice_stt`` might require some runtime dependencies to be already installed on your system (see below):
Running ``deepspeech`` might require some runtime dependencies to be already installed on your system (see below):
* ``sox`` - The Python and Node.JS clients use SoX to resample files to 16kHz.
* ``libgomp1`` - libsox (statically linked into the clients) depends on OpenMP. Some people have had to install this manually.
@@ -28,29 +28,29 @@ Please refer to your system's documentation on how to install these dependencies
CUDA dependency
^^^^^^^^^^^^^^^
The CUDA capable builds (Python, NodeJS, C++, etc) depend on CUDA 10.1 and CuDNN v7.6.
The GPU capable builds (Python, NodeJS, C++, etc) depend on CUDA 10.1 and CuDNN v7.6.
Getting the pre-trained model
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to use the pre-trained English model for performing speech-to-text, you can download it (along with other important inference material) from the Mozilla Voice STT `releases page <https://github.com/mozilla/DeepSpeech/releases>`_. Alternatively, you can run the following command to download the model files in your current directory:
If you want to use the pre-trained English model for performing speech-to-text, you can download it (along with other important inference material) from the DeepSpeech `releases page <https://github.com/mozilla/DeepSpeech/releases>`_. Alternatively, you can run the following command to download the model files in your current directory:
.. code-block:: bash
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.7.4/deepspeech-0.7.4-models.pbmm
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.7.4/deepspeech-0.7.4-models.scorer
There are several pre-trained model files available in official releases. Files ending in ``.pbmm`` are compatible with clients and language bindings built against the standard TensorFlow runtime. Usually these packages are simply called ``mozilla_voice_stt``. These files are also compatible with CUDA enabled clients and language bindings. These packages are usually called ``mozilla_voice_stt_cuda``. Files ending in ``.tflite`` are compatible with clients and language bindings built against the `TensorFlow Lite runtime <https://www.tensorflow.org/lite/>`_. These models are optimized for size and performance in low power devices. On desktop platforms, the compatible packages are called ``mozilla_voice_stt_tflite``. On Android and Raspberry Pi, we only publish TensorFlow Lite enabled packages, and they are simply called ``mozilla_voice_stt``. You can see a full list of supported platforms and which TensorFlow runtime is supported at :ref:`supported-platforms-inference`.
There are several pre-trained model files available in official releases. Files ending in ``.pbmm`` are compatible with clients and language bindings built against the standard TensorFlow runtime. Usually these packages are simply called ``deepspeech``. These files are also compatible with CUDA enabled clients and language bindings. These packages are usually called ``deepspeech-gpu``. Files ending in ``.tflite`` are compatible with clients and language bindings built against the `TensorFlow Lite runtime <https://www.tensorflow.org/lite/>`_. These models are optimized for size and performance in low power devices. On desktop platforms, the compatible packages are called ``deepspeech-tflite``. On Android and Raspberry Pi, we only publish TensorFlow Lite enabled packages, and they are simply called ``deepspeech``. You can see a full list of supported platforms and which TensorFlow runtime is supported at :ref:`supported-platforms-inference`.
+--------------------------+---------------------+---------------------+
| Package/Model type | .pbmm | .tflite |
+==========================+=====================+=====================+
| mozilla_voice_stt | Depends on platform | Depends on platform |
+--------------------------+---------------------+---------------------+
| mozilla_voice_stt_cuda | ✅ | ❌ |
+--------------------------+---------------------+---------------------+
| mozilla_voice_stt_tflite | ❌ | ✅ |
+--------------------------+---------------------+---------------------+
+--------------------+---------------------+---------------------+
| Package/Model type | .pbmm | .tflite |
+====================+=====================+=====================+
| deepspeech | Depends on platform | Depends on platform |
+--------------------+---------------------+---------------------+
| deepspeech-gpu | ✅ | ❌ |
+--------------------+---------------------+---------------------+
| deepspeech-tflite | ❌ | ✅ |
+--------------------+---------------------+---------------------+
Finally, the pre-trained model files also include files ending in ``.scorer``. These are external scorers (language models) that are used at inference time in conjunction with an acoustic model (``.pbmm`` or ``.tflite`` file) to produce transcriptions. We also provide further documentation on :ref:`the decoding process <decoder-docs>` and :ref:`how scorers are generated <scorer-scripts>`.
@@ -61,82 +61,82 @@ The release notes include detailed information on how the released models were t
The process for training an acoustic model is described in :ref:`training-docs`. In particular, fine tuning a release model using your own data can be a good way to leverage relatively smaller amounts of data that would not be sufficient for training a new model from scratch. See the :ref:`fine tuning and transfer learning sections <training-fine-tuning>` for more information. :ref:`Data augmentation <training-data-augmentation>` can also be a good way to increase the value of smaller training sets.
Creating your own external scorer from text data is another way that you can adapt the model to your specific needs. The process and tools used to generate an external scorer package are described in :ref:`scorer-scripts` and an overview of how the external scorer is used by Mozilla Voice STT to perform inference is available in :ref:`decoder-docs`. Generating a smaller scorer from a single-purpose text dataset is a quick process and can bring significant accuracy improvements, especially for more constrained, limited-vocabulary applications.
Creating your own external scorer from text data is another way that you can adapt the model to your specific needs. The process and tools used to generate an external scorer package are described in :ref:`scorer-scripts` and an overview of how the external scorer is used by DeepSpeech to perform inference is available in :ref:`decoder-docs`. Generating a smaller scorer from a single-purpose text dataset is a quick process and can bring significant accuracy improvements, especially for more constrained, limited-vocabulary applications.
Model compatibility
^^^^^^^^^^^^^^^^^^^
Mozilla Voice STT models are versioned to keep you from trying to use an incompatible graph with a newer client after a breaking change was made to the code. If you get an error saying your model file version is too old for the client, you should either upgrade to a newer model release, re-export your model from the checkpoint using a newer version of the code, or downgrade your client if you need to use the old model and can't re-export it.
DeepSpeech models are versioned to keep you from trying to use an incompatible graph with a newer client after a breaking change was made to the code. If you get an error saying your model file version is too old for the client, you should either upgrade to a newer model release, re-export your model from the checkpoint using a newer version of the code, or downgrade your client if you need to use the old model and can't re-export it.
.. _py-usage:
Using the Python package
^^^^^^^^^^^^^^^^^^^^^^^^
Pre-built binaries which can be used for performing inference with a trained model can be installed with ``pip3``. You can then use the ``mozilla_voice_stt`` binary to do speech-to-text on an audio file:
Pre-built binaries which can be used for performing inference with a trained model can be installed with ``pip3``. You can then use the ``deepspeech`` binary to do speech-to-text on an audio file:
For the Python bindings, it is highly recommended that you perform the installation within a Python 3.5 or later virtual environment. You can find more information about those in `this documentation <http://docs.python-guide.org/en/latest/dev/virtualenvs/>`_.
We will continue under the assumption that you already have your system properly set up to create new virtual environments.
Create a Mozilla Voice STT virtual environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Create a DeepSpeech virtual environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Creating a virtual environment will create a directory containing a ``python3`` binary and everything needed to run Mozilla Voice STT. You can use whatever directory you want. For the purpose of the documentation, we will rely on ``$HOME/tmp/stt-venv``. You can create it using this command:
Creating a virtual environment will create a directory containing a ``python3`` binary and everything needed to run deepspeech. You can use whatever directory you want. For the purpose of the documentation, we will rely on ``$HOME/tmp/deepspeech-venv``. You can create it using this command:
.. code-block::
$ virtualenv -p python3 $HOME/tmp/stt-venv/
$ virtualenv -p python3 $HOME/tmp/deepspeech-venv/
Once this command completes successfully, the environment will be ready to be activated.
Activating the environment
~~~~~~~~~~~~~~~~~~~~~~~~~~
Each time you need to work with Mozilla Voice STT, you have to *activate* this virtual environment. This is done with this simple command:
Each time you need to work with DeepSpeech, you have to *activate* this virtual environment. This is done with this simple command:
.. code-block::
$ source $HOME/tmp/stt-venv/bin/activate
$ source $HOME/tmp/deepspeech-venv/bin/activate
Installing Mozilla Voice STT Python bindings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Installing DeepSpeech Python bindings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once your environment has been set up and loaded, you can use ``pip3`` to manage packages locally. On a fresh setup of the ``virtualenv``\ , you will have to install the Mozilla Voice STT wheel. You can check if ``mozilla_voice_stt`` is already installed with ``pip3 list``.
Once your environment has been set up and loaded, you can use ``pip3`` to manage packages locally. On a fresh setup of the ``virtualenv``\ , you will have to install the DeepSpeech wheel. You can check if ``deepspeech`` is already installed with ``pip3 list``.
To perform the installation, just use ``pip3`` as such:
.. code-block::
$ pip3 install mozilla_voice_stt
$ pip3 install deepspeech
If ``mozilla_voice_stt`` is already installed, you can update it as such:
If ``deepspeech`` is already installed, you can update it as such:
.. code-block::
$ pip3 install --upgrade mozilla_voice_stt
$ pip3 install --upgrade deepspeech
Alternatively, if you have a supported NVIDIA GPU on Linux, you can install the CUDA specific package as follows:
Alternatively, if you have a supported NVIDIA GPU on Linux, you can install the GPU specific package as follows:
.. code-block::
$ pip3 install mozilla_voice_stt_cuda
$ pip3 install deepspeech-gpu
See the `release notes <https://github.com/mozilla/DeepSpeech/releases>`_ to find which GPUs are supported. Please ensure you have the required `CUDA dependency <#cuda-dependency>`_.
You can update ``mozilla_voice_stt_cuda`` as follows:
You can update ``deepspeech-gpu`` as follows:
.. code-block::
$ pip3 install --upgrade mozilla_voice_stt_cuda
$ pip3 install --upgrade deepspeech-gpu
In both cases, ``pip3`` should take care of installing all the required dependencies. After installation has finished, you should be able to call ``mozilla_voice_stt`` from the command-line.
In both cases, ``pip3`` should take care of installing all the required dependencies. After installation has finished, you should be able to call ``deepspeech`` from the command-line.
Note: the following command assumes you `downloaded the pre-trained model <#getting-the-pre-trained-model>`_.
.. code-block:: bash
mozilla_voice_stt --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio my_audio_file.wav
deepspeech --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio my_audio_file.wav
The ``--scorer`` argument is optional, and represents an external language model to be used when transcribing the audio.
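The scorer's hyperparameters can also be tuned, or the scorer disabled entirely, through the Python API. A small sketch, with illustrative alpha/beta values rather than the tuned defaults shipped with the release scorer:

.. code-block:: python

   from deepspeech import Model

   model = Model('deepspeech-0.7.4-models.pbmm')
   model.enableExternalScorer('deepspeech-0.7.4-models.scorer')

   # Language model weight (alpha) and word insertion bonus (beta).
   # These values are placeholders for illustration, not tuned defaults.
   model.setScorerAlphaBeta(0.93, 1.18)

   # Fall back to purely acoustic decoding, with no language model.
   model.disableExternalScorer()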
@ -151,7 +151,7 @@ You can download the JS bindings using ``npm``\ :
.. code-block:: bash
npm install mozilla_voice_stt
npm install deepspeech
Please note that as of now, we support:
- Node.JS versions 4 to 13.
@ -159,11 +159,11 @@ Please note that as of now, we support:
TypeScript support is also provided.
Alternatively, if you're using Linux and have a supported NVIDIA GPU, you can install the CUDA specific package as follows:
Alternatively, if you're using Linux and have a supported NVIDIA GPU, you can install the GPU specific package as follows:
.. code-block:: bash
npm install mozilla_voice_stt_cuda
npm install deepspeech-gpu
See the `release notes <https://github.com/mozilla/DeepSpeech/releases>`_ to find which GPUs are supported. Please ensure you have the required `CUDA dependency <#cuda-dependency>`_.
@ -174,7 +174,7 @@ See the :ref:`TypeScript client <js-api-example>` for an example of how to use t
Using the command-line client
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To download the pre-built binaries for the ``mozilla_voice_stt`` command-line (compiled C++) client, use ``util/taskcluster.py``\ :
To download the pre-built binaries for the ``deepspeech`` command-line (compiled C++) client, use ``util/taskcluster.py``\ :
.. code-block:: bash
@ -192,7 +192,7 @@ also, if you need some binaries different than current master, like ``v0.2.0-alp
python3 util/taskcluster.py --branch "v0.2.0-alpha.6" --target "."
The script ``taskcluster.py`` will download ``native_client.tar.xz`` (which includes the ``mozilla_voice_stt`` binary and associated libraries) and extract it into the current folder. Also, ``taskcluster.py`` will download binaries for Linux/x86_64 by default, but you can override that behavior with the ``--arch`` parameter. See the help info with ``python util/taskcluster.py -h`` for more details. Specific branches of Mozilla Voice STT or TensorFlow can be specified as well.
The script ``taskcluster.py`` will download ``native_client.tar.xz`` (which includes the ``deepspeech`` binary and associated libraries) and extract it into the current folder. Also, ``taskcluster.py`` will download binaries for Linux/x86_64 by default, but you can override that behavior with the ``--arch`` parameter. See the help info with ``python util/taskcluster.py -h`` for more details. Specific branches of DeepSpeech or TensorFlow can be specified as well.
Alternatively, you may manually download the ``native_client.tar.xz`` from the `releases page <https://github.com/mozilla/DeepSpeech/releases>`_.
@ -200,9 +200,9 @@ Note: the following command assumes you `downloaded the pre-trained model <#gett
.. code-block:: bash
./mozilla_voice_stt --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio_input.wav
./deepspeech --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio_input.wav
See the help output with ``./mozilla_voice_stt -h`` for more details.
See the help output with ``./deepspeech -h`` for more details.
Installing bindings from source
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -212,14 +212,14 @@ If pre-built binaries aren't available for your system, you'll need to install t
Dockerfile for building from source
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We provide ``Dockerfile.build`` to automatically build ``libmozilla_voice_stt.so``, the C++ native client, Python bindings, and KenLM.
We provide ``Dockerfile.build`` to automatically build ``libdeepspeech.so``, the C++ native client, Python bindings, and KenLM.
You need to generate the Dockerfile from the template using:
.. code-block:: bash
make Dockerfile.build
If you want to specify a different Mozilla Voice STT repository / branch, you can pass ``DEEPSPEECH_REPO`` or ``DEEPSPEECH_SHA`` parameters:
If you want to specify a different DeepSpeech repository / branch, you can pass ``DEEPSPEECH_REPO`` or ``DEEPSPEECH_SHA`` parameters:
.. code-block:: bash
@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
#
# Mozilla Voice STT documentation build configuration file, created by
# DeepSpeech documentation build configuration file, created by
# sphinx-quickstart on Thu Feb 2 21:20:39 2017.
#
# This file is execfile()d with the current directory set to its
@ -24,7 +24,7 @@ import sys
sys.path.insert(0, os.path.abspath('../'))
autodoc_mock_imports = ['mozilla_voice_stt']
autodoc_mock_imports = ['deepspeech']
# This is in fact only relevant on ReadTheDocs, but we want to run the same way
# on our CI as in RTD to avoid regressions on RTD that we would not catch on
@ -41,7 +41,7 @@ import semver
# -- Project information -----------------------------------------------------
project = u'Mozilla Voice STT'
project = u'DeepSpeech'
copyright = '2019-2020, Mozilla Corporation'
author = 'Mozilla Corporation'
@ -143,7 +143,7 @@ html_static_path = ['.static']
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'sttdoc'
htmlhelp_basename = 'DeepSpeechdoc'
# -- Options for LaTeX output ---------------------------------------------
@ -170,7 +170,7 @@ latex_elements = {
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'Mozilla_Voice_STT.tex', u'Mozilla Voice STT Documentation',
(master_doc, 'DeepSpeech.tex', u'DeepSpeech Documentation',
u'Mozilla Research', 'manual'),
]
@ -180,7 +180,7 @@ latex_documents = [
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'mozilla_voice_stt', u'Mozilla Voice STT Documentation',
(master_doc, 'deepspeech', u'DeepSpeech Documentation',
[author], 1)
]
@ -191,8 +191,8 @@ man_pages = [
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'Mozilla Voice STT', u'Mozilla Voice STT Documentation',
author, 'Mozilla Voice STT', 'One line description of project.',
(master_doc, 'DeepSpeech', u'DeepSpeech Documentation',
author, 'DeepSpeech', 'One line description of project.',
'Miscellaneous'),
]
@ -790,7 +790,7 @@ WARN_LOGFILE =
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.
INPUT = native_client/mozilla_voice_stt.h
INPUT = native_client/deepspeech.h
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
@ -790,7 +790,7 @@ WARN_LOGFILE =
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.
INPUT = native_client/dotnet/MozillaVoiceSttClient/ native_client/dotnet/MozillaVoiceSttClient/Interfaces/ native_client/dotnet/MozillaVoiceSttClient/Enums/ native_client/dotnet/MozillaVoiceSttClient/Models/
INPUT = native_client/dotnet/DeepSpeechClient/ native_client/dotnet/DeepSpeechClient/Interfaces/ native_client/dotnet/DeepSpeechClient/Enums/ native_client/dotnet/DeepSpeechClient/Models/
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
@ -790,7 +790,7 @@ WARN_LOGFILE =
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.
INPUT = native_client/java/libmozillavoicestt/src/main/java/org/mozilla/voice/stt/ native_client/java/libmozillavoicestt/src/main/java/org/mozilla/voice/stt_doc/
INPUT = native_client/java/libdeepspeech/src/main/java/org/mozilla/deepspeech/libdeepspeech/ native_client/java/libdeepspeech/src/main/java/org/mozilla/deepspeech/libdeepspeech_doc/
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
@ -1,23 +1,23 @@
.. Mozilla Voice STT documentation master file, created by
.. DeepSpeech documentation master file, created by
sphinx-quickstart on Thu Feb 2 21:20:39 2017.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to Mozilla Voice STT's documentation!
Welcome to DeepSpeech's documentation!
======================================
Mozilla Voice STT is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on `Baidu's Deep Speech research paper <https://arxiv.org/abs/1412.5567>`_. Mozilla Voice STT uses Google's `TensorFlow <https://www.tensorflow.org/>`_ to make the implementation easier.
DeepSpeech is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on `Baidu's Deep Speech research paper <https://arxiv.org/abs/1412.5567>`_. Project DeepSpeech uses Google's `TensorFlow <https://www.tensorflow.org/>`_ to make the implementation easier.
To install and use Mozilla Voice STT all you have to do is:
To install and use DeepSpeech all you have to do is:
.. code-block:: bash
# Create and activate a virtualenv
virtualenv -p python3 $HOME/tmp/stt-venv/
source $HOME/tmp/stt-venv/bin/activate
virtualenv -p python3 $HOME/tmp/deepspeech-venv/
source $HOME/tmp/deepspeech-venv/bin/activate
# Install Mozilla Voice STT
pip3 install mozilla_voice_stt
# Install DeepSpeech
pip3 install deepspeech
# Download pre-trained English model files
curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.4/deepspeech-0.7.4-models.pbmm
@ -28,27 +28,27 @@ To install and use Mozilla Voice STT all you have to do is:
tar xvf audio-0.7.4.tar.gz
# Transcribe an audio file
mozilla_voice_stt --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio/2830-3980-0043.wav
deepspeech --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio/2830-3980-0043.wav
A pre-trained English model is available for use and can be downloaded by following the instructions in :ref:`the usage docs <usage-docs>`. For the latest release, including pre-trained models and checkpoints, `see the GitHub releases page <https://github.com/mozilla/DeepSpeech/releases/latest>`_.
Quicker inference can be performed using a supported NVIDIA GPU on Linux. See the `release notes <https://github.com/mozilla/DeepSpeech/releases/latest>`_ to find which GPUs are supported. To run ``mozilla_voice_stt`` on a GPU, install the GPU specific package:
Quicker inference can be performed using a supported NVIDIA GPU on Linux. See the `release notes <https://github.com/mozilla/DeepSpeech/releases/latest>`_ to find which GPUs are supported. To run ``deepspeech`` on a GPU, install the GPU specific package:
.. code-block:: bash
# Create and activate a virtualenv
virtualenv -p python3 $HOME/tmp/stt-gpu-venv/
source $HOME/tmp/stt-gpu-venv/bin/activate
virtualenv -p python3 $HOME/tmp/deepspeech-gpu-venv/
source $HOME/tmp/deepspeech-gpu-venv/bin/activate
# Install Mozilla Voice STT CUDA enabled package
pip3 install mozilla_voice_stt_cuda
# Install DeepSpeech CUDA enabled package
pip3 install deepspeech-gpu
# Transcribe an audio file.
mozilla_voice_stt --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio/2830-3980-0043.wav
deepspeech --model deepspeech-0.7.4-models.pbmm --scorer deepspeech-0.7.4-models.scorer --audio audio/2830-3980-0043.wav
Please ensure you have the required :ref:`CUDA dependencies <cuda-deps>`.
See the output of ``mozilla_voice_stt -h`` for more information on the use of ``mozilla_voice_stt``. (If you experience problems running ``mozilla_voice_stt``, please check :ref:`required runtime dependencies <runtime-deps>`).
See the output of ``deepspeech -h`` for more information on the use of ``deepspeech``. (If you experience problems running ``deepspeech``, please check :ref:`required runtime dependencies <runtime-deps>`).
.. toctree::
:maxdepth: 2
@ -76,7 +76,7 @@ See the output of ``mozilla_voice_stt -h`` for more information on the use of ``
:maxdepth: 2
:caption: Architecture and training
AcousticModel
DeepSpeech
Geometry
@ -10,7 +10,7 @@ import csv
import os
import sys
from mozilla_voice_stt import Model
from deepspeech import Model
from deepspeech_training.util.evaluate_tools import calculate_and_print_report
from deepspeech_training.util.flags import create_flags
from functools import partial
@ -19,8 +19,11 @@ from six.moves import zip, range
r'''
This module should be self-contained:
- build libdeepspeech.so with TFLite:
- bazel build [...] --define=runtime=tflite [...] //native_client:libdeepspeech.so
- make -C native_client/python/ TFDIR=... bindings
- set up a virtualenv
- pip install mozilla_voice_stt_tflite
- pip install native_client/python/dist/deepspeech*.whl
- pip install -r requirements_eval_tflite.txt
Then run with a TF Lite model, a scorer and a CSV test file
@ -1,6 +1,6 @@
Examples
========
Mozilla Voice STT examples were moved to a separate repository.
DeepSpeech examples were moved to a separate repository.
New location: https://github.com/mozilla/DeepSpeech-examples
@ -1,14 +1,14 @@
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := mozilla_voice_stt-prebuilt
LOCAL_SRC_FILES := $(TFDIR)/bazel-bin/native_client/libmozilla_voice_stt.so
LOCAL_MODULE := deepspeech-prebuilt
LOCAL_SRC_FILES := $(TFDIR)/bazel-bin/native_client/libdeepspeech.so
include $(PREBUILT_SHARED_LIBRARY)
include $(CLEAR_VARS)
LOCAL_CPP_EXTENSION := .cc .cxx .cpp
LOCAL_MODULE := mozilla_voice_stt
LOCAL_MODULE := deepspeech
LOCAL_SRC_FILES := client.cc
LOCAL_SHARED_LIBRARIES := mozilla_voice_stt-prebuilt
LOCAL_SHARED_LIBRARIES := deepspeech-prebuilt
LOCAL_LDFLAGS := -Wl,--no-as-needed
include $(BUILD_EXECUTABLE)
@ -110,10 +110,10 @@ cc_library(
)
tf_cc_shared_object(
name = "libmozilla_voice_stt.so",
name = "libdeepspeech.so",
srcs = [
"deepspeech.cc",
"mozilla_voice_stt.h",
"deepspeech.h",
"deepspeech_errors.cc",
"modelstate.cc",
"modelstate.h",
@ -163,7 +163,7 @@ tf_cc_shared_object(
#"//tensorflow/core:all_kernels",
### => Trying to be more fine-grained
### Use bin/ops_in_graph.py to list all the ops used by a frozen graph.
### CPU only build, libmozilla_voice_stt.so file size reduced by ~50%
### CPU only build, libdeepspeech.so file size reduced by ~50%
"//tensorflow/core/kernels:spectrogram_op", # AudioSpectrogram
"//tensorflow/core/kernels:bias_op", # BiasAdd
"//tensorflow/core/kernels:cast_op", # Cast
@ -203,11 +203,11 @@ tf_cc_shared_object(
)
genrule(
name = "libmozilla_voice_stt_so_dsym",
srcs = [":libmozilla_voice_stt.so"],
outs = ["libmozilla_voice_stt.so.dSYM"],
name = "libdeepspeech_so_dsym",
srcs = [":libdeepspeech.so"],
outs = ["libdeepspeech.so.dSYM"],
output_to_bindir = True,
cmd = "dsymutil $(location :libmozilla_voice_stt.so) -o $@"
cmd = "dsymutil $(location :libdeepspeech.so) -o $@"
)
cc_binary(
@ -1,5 +1,5 @@
This file contains some notes on coding style within the C++ portion of the
Mozilla Voice STT project. It is very much a work in progress and incomplete.
DeepSpeech project. It is very much a work in progress and incomplete.
General
=======
@ -16,32 +16,32 @@ include definitions.mk
default: $(DEEPSPEECH_BIN)
clean:
rm -f $(DEEPSPEECH_BIN)
rm -f deepspeech
$(DEEPSPEECH_BIN): client.cc Makefile
$(CXX) $(CFLAGS) $(CFLAGS_DEEPSPEECH) $(SOX_CFLAGS) client.cc $(LDFLAGS) $(SOX_LDFLAGS)
ifeq ($(OS),Darwin)
install_name_tool -change bazel-out/local-opt/bin/native_client/libmozilla_voice_stt.so @rpath/libmozilla_voice_stt.so $(DEEPSPEECH_BIN)
install_name_tool -change bazel-out/local-opt/bin/native_client/libdeepspeech.so @rpath/libdeepspeech.so deepspeech
endif
run: $(DEEPSPEECH_BIN)
${META_LD_LIBRARY_PATH}=${TFDIR}/bazel-bin/native_client:${${META_LD_LIBRARY_PATH}} ./$(DEEPSPEECH_BIN) ${ARGS}
${META_LD_LIBRARY_PATH}=${TFDIR}/bazel-bin/native_client:${${META_LD_LIBRARY_PATH}} ./deepspeech ${ARGS}
debug: $(DEEPSPEECH_BIN)
${META_LD_LIBRARY_PATH}=${TFDIR}/bazel-bin/native_client:${${META_LD_LIBRARY_PATH}} gdb --args ./$(DEEPSPEECH_BIN) ${ARGS}
${META_LD_LIBRARY_PATH}=${TFDIR}/bazel-bin/native_client:${${META_LD_LIBRARY_PATH}} gdb --args ./deepspeech ${ARGS}
install: $(DEEPSPEECH_BIN)
install -d ${PREFIX}/lib
install -m 0644 ${TFDIR}/bazel-bin/native_client/libmozilla_voice_stt.so ${PREFIX}/lib/
install -m 0644 ${TFDIR}/bazel-bin/native_client/libdeepspeech.so ${PREFIX}/lib/
install -d ${PREFIX}/include
install -m 0644 mozilla_voice_stt.h ${PREFIX}/include
install -m 0644 deepspeech.h ${PREFIX}/include
install -d ${PREFIX}/bin
install -m 0755 $(DEEPSPEECH_BIN) ${PREFIX}/bin/
install -m 0755 deepspeech ${PREFIX}/bin/
uninstall:
rm -f ${PREFIX}/bin/$(DEEPSPEECH_BIN)
rm -f ${PREFIX}/bin/deepspeech
rmdir --ignore-fail-on-non-empty ${PREFIX}/bin
rm -f ${PREFIX}/lib/libmozilla_voice_stt.so
rm -f ${PREFIX}/lib/libdeepspeech.so
rmdir --ignore-fail-on-non-empty ${PREFIX}/lib
print-toolchain:
@ -8,7 +8,7 @@
#endif
#include <iostream>
#include "mozilla_voice_stt.h"
#include "deepspeech.h"
char* model = NULL;
@ -43,7 +43,7 @@ void PrintHelp(const char* bin)
std::cout <<
"Usage: " << bin << " --model MODEL [--scorer SCORER] --audio AUDIO [-t] [-e]\n"
"\n"
"Running Mozilla Voice STT inference.\n"
"Running DeepSpeech inference.\n"
"\n"
"\t--model MODEL\t\t\tPath to the model (protocol buffer binary file)\n"
"\t--scorer SCORER\t\t\tPath to the external scorer file\n"
@ -58,9 +58,9 @@ void PrintHelp(const char* bin)
"\t--stream size\t\t\tRun in stream mode, output intermediate results\n"
"\t--help\t\t\t\tShow help\n"
"\t--version\t\t\tPrint version and exits\n";
char* version = STT_Version();
std::cerr << "Mozilla Voice STT " << version << "\n";
STT_FreeString(version);
char* version = DS_Version();
std::cerr << "DeepSpeech " << version << "\n";
DS_FreeString(version);
exit(1);
}
@ -153,9 +153,9 @@ bool ProcessArgs(int argc, char** argv)
}
if (has_versions) {
char* version = STT_Version();
std::cout << "Mozilla Voice STT " << version << "\n";
STT_FreeString(version);
char* version = DS_Version();
std::cout << "DeepSpeech " << version << "\n";
DS_FreeString(version);
return false;
}
@ -34,7 +34,7 @@
#endif // NO_DIR
#include <vector>
#include "mozilla_voice_stt.h"
#include "deepspeech.h"
#include "args.h"
typedef struct {
@ -168,17 +168,17 @@ LocalDsSTT(ModelState* aCtx, const short* aBuffer, size_t aBufferSize,
// sphinx-doc: c_ref_inference_start
if (extended_output) {
Metadata *result = STT_SpeechToTextWithMetadata(aCtx, aBuffer, aBufferSize, 1);
Metadata *result = DS_SpeechToTextWithMetadata(aCtx, aBuffer, aBufferSize, 1);
res.string = CandidateTranscriptToString(&result->transcripts[0]);
STT_FreeMetadata(result);
DS_FreeMetadata(result);
} else if (json_output) {
Metadata *result = STT_SpeechToTextWithMetadata(aCtx, aBuffer, aBufferSize, json_candidate_transcripts);
Metadata *result = DS_SpeechToTextWithMetadata(aCtx, aBuffer, aBufferSize, json_candidate_transcripts);
res.string = MetadataToJSON(result);
STT_FreeMetadata(result);
DS_FreeMetadata(result);
} else if (stream_size > 0) {
StreamingState* ctx;
int status = STT_CreateStream(aCtx, &ctx);
if (status != STT_ERR_OK) {
int status = DS_CreateStream(aCtx, &ctx);
if (status != DS_ERR_OK) {
res.string = strdup("");
return res;
}
@ -186,22 +186,22 @@ LocalDsSTT(ModelState* aCtx, const short* aBuffer, size_t aBufferSize,
const char *last = nullptr;
while (off < aBufferSize) {
size_t cur = aBufferSize - off > stream_size ? stream_size : aBufferSize - off;
STT_FeedAudioContent(ctx, aBuffer + off, cur);
DS_FeedAudioContent(ctx, aBuffer + off, cur);
off += cur;
const char* partial = STT_IntermediateDecode(ctx);
const char* partial = DS_IntermediateDecode(ctx);
if (last == nullptr || strcmp(last, partial)) {
printf("%s\n", partial);
last = partial;
} else {
STT_FreeString((char *) partial);
DS_FreeString((char *) partial);
}
}
if (last != nullptr) {
STT_FreeString((char *) last);
DS_FreeString((char *) last);
}
res.string = STT_FinishStream(ctx);
res.string = DS_FinishStream(ctx);
} else {
res.string = STT_SpeechToText(aCtx, aBuffer, aBufferSize);
res.string = DS_SpeechToText(aCtx, aBuffer, aBufferSize);
}
// sphinx-doc: c_ref_inference_stop
@ -367,7 +367,7 @@ GetAudioBuffer(const char* path, int desired_sample_rate)
void
ProcessFile(ModelState* context, const char* path, bool show_times)
{
ds_audio_buffer audio = GetAudioBuffer(path, STT_GetModelSampleRate(context));
ds_audio_buffer audio = GetAudioBuffer(path, DS_GetModelSampleRate(context));
// Pass audio to DeepSpeech
// We take half of buffer_size because buffer is a char* while
@ -381,7 +381,7 @@ ProcessFile(ModelState* context, const char* path, bool show_times)
if (result.string) {
printf("%s\n", result.string);
STT_FreeString((char*)result.string);
DS_FreeString((char*)result.string);
}
if (show_times) {
@ -400,16 +400,16 @@ main(int argc, char **argv)
// Initialise DeepSpeech
ModelState* ctx;
// sphinx-doc: c_ref_model_start
int status = STT_CreateModel(model, &ctx);
int status = DS_CreateModel(model, &ctx);
if (status != 0) {
char* error = STT_ErrorCodeToErrorMessage(status);
char* error = DS_ErrorCodeToErrorMessage(status);
fprintf(stderr, "Could not create model: %s\n", error);
free(error);
return 1;
}
if (set_beamwidth) {
status = STT_SetModelBeamWidth(ctx, beam_width);
status = DS_SetModelBeamWidth(ctx, beam_width);
if (status != 0) {
fprintf(stderr, "Could not set model beam width.\n");
return 1;
@ -417,13 +417,13 @@ main(int argc, char **argv)
}
if (scorer) {
status = STT_EnableExternalScorer(ctx, scorer);
status = DS_EnableExternalScorer(ctx, scorer);
if (status != 0) {
fprintf(stderr, "Could not enable external scorer.\n");
return 1;
}
if (set_alphabeta) {
status = STT_SetScorerAlphaBeta(ctx, lm_alpha, lm_beta);
status = DS_SetScorerAlphaBeta(ctx, lm_alpha, lm_beta);
if (status != 0) {
fprintf(stderr, "Error setting scorer alpha and beta.\n");
return 1;
@ -485,7 +485,7 @@ main(int argc, char **argv)
sox_quit();
#endif // NO_SOX
STT_FreeModel(ctx);
DS_FreeModel(ctx);
return 0;
}
@ -10,7 +10,7 @@ __version__ = swigwrapper.__version__.decode('utf-8')
# Hack: import error codes by matching on their names, as SWIG unfortunately
# does not support binding enums to Python in a scoped manner yet.
for symbol in dir(swigwrapper):
if symbol.startswith('STT_ERR_'):
if symbol.startswith('DS_ERR_'):
globals()[symbol] = getattr(swigwrapper, symbol)
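# After this loop runs, the matched error codes (e.g. DS_ERR_SCORER_NO_TRIE)
# are module-level attributes, so callers can import them from this package
# and compare them against the return codes of the Scorer methods below.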
class Scorer(swigwrapper.Scorer):
@ -74,13 +74,13 @@ int Scorer::load_lm(const std::string& lm_path)
// Check if file is readable to avoid KenLM throwing an exception
const char* filename = lm_path.c_str();
if (access(filename, R_OK) != 0) {
return STT_ERR_SCORER_UNREADABLE;
return DS_ERR_SCORER_UNREADABLE;
}
// Check if the file format is valid to avoid KenLM throwing an exception
lm::ngram::ModelType model_type;
if (!lm::ngram::RecognizeBinary(filename, model_type)) {
return STT_ERR_SCORER_INVALID_LM;
return DS_ERR_SCORER_INVALID_LM;
}
// Load the LM
@ -97,7 +97,7 @@ int Scorer::load_lm(const std::string& lm_path)
uint64_t trie_offset = language_model_->GetEndOfSearchOffset();
if (package_size <= trie_offset) {
// File ends without a trie structure
return STT_ERR_SCORER_NO_TRIE;
return DS_ERR_SCORER_NO_TRIE;
}
// Read metadata and trie from file
@ -113,7 +113,7 @@ int Scorer::load_trie(std::ifstream& fin, const std::string& file_path)
if (magic != MAGIC) {
std::cerr << "Error: Can't parse scorer file, invalid header. Try updating "
"your scorer file." << std::endl;
return STT_ERR_SCORER_INVALID_TRIE;
return DS_ERR_SCORER_INVALID_TRIE;
}
int version;
@ -125,10 +125,10 @@ int Scorer::load_trie(std::ifstream& fin, const std::string& file_path)
if (version < FILE_VERSION) {
std::cerr << "Update your scorer file.";
} else {
std::cerr << "Downgrade your scorer file or update your version of Mozilla Voice STT.";
std::cerr << "Downgrade your scorer file or update your version of DeepSpeech.";
}
std::cerr << std::endl;
return STT_ERR_SCORER_VERSION_MISMATCH;
return DS_ERR_SCORER_VERSION_MISMATCH;
}
fin.read(reinterpret_cast<char*>(&is_utf8_mode_), sizeof(is_utf8_mode_));
@ -143,7 +143,7 @@ int Scorer::load_trie(std::ifstream& fin, const std::string& file_path)
opt.mode = fst::FstReadOptions::MAP;
opt.source = file_path;
dictionary.reset(FstType::Read(fin, opt));
return STT_ERR_OK;
return DS_ERR_OK;
}
bool Scorer::save_dictionary(const std::string& path, bool append_instead_of_overwrite)
@ -13,7 +13,7 @@
#include "path_trie.h"
#include "alphabet.h"
#include "mozilla_voice_stt.h"
#include "deepspeech.h"
const double OOV_SCORE = -1000.0;
const std::string START_TOKEN = "<s>";
@ -42,14 +42,14 @@ namespace std {
%constant const char* __version__ = ds_version();
%constant const char* __git_version__ = ds_git_version();
// Import only the error code enum definitions from mozilla_voice_stt.h
// Import only the error code enum definitions from deepspeech.h
// We can't just do |%ignore "";| here because it affects this file globally (even
// files %include'd above). That causes SWIG to lose destructor information and
// leads to leaks of the wrapper objects.
// Instead we ignore functions and classes (structs), which are the only other
// things in mozilla_voice_stt.h. If we add some new construct to mozilla_voice_stt.h we need
// things in deepspeech.h. If we add some new construct to deepspeech.h we need
// to update the ignore rules here to avoid exposing unwanted APIs in the decoder
// package.
%rename("$ignore", %$isfunction) "";
%rename("$ignore", %$isclass) "";
%include "../mozilla_voice_stt.h"
%include "../deepspeech.h"
@ -9,7 +9,7 @@
#include <utility>
#include <vector>
#include "mozilla_voice_stt.h"
#include "deepspeech.h"
#include "alphabet.h"
#include "modelstate.h"
@ -25,7 +25,7 @@
#ifdef __ANDROID__
#include <android/log.h>
#define LOG_TAG "libmozilla_voice_stt"
#define LOG_TAG "libdeepspeech"
#define LOGD(...) __android_log_print(ANDROID_LOG_DEBUG, LOG_TAG, __VA_ARGS__)
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, __VA_ARGS__)
#else
@ -263,23 +263,23 @@ StreamingState::processBatch(const vector<float>& buf, unsigned int n_steps)
}
int
STT_CreateModel(const char* aModelPath,
DS_CreateModel(const char* aModelPath,
ModelState** retval)
{
*retval = nullptr;
std::cerr << "TensorFlow: " << tf_local_git_version() << std::endl;
std::cerr << "Mozilla Voice STT: " << ds_git_version() << std::endl;
std::cerr << "DeepSpeech: " << ds_git_version() << std::endl;
#ifdef __ANDROID__
LOGE("TensorFlow: %s", tf_local_git_version());
LOGD("TensorFlow: %s", tf_local_git_version());
LOGE("Mozilla Voice STT: %s", ds_git_version());
LOGD("Mozilla Voice STT: %s", ds_git_version());
LOGE("DeepSpeech: %s", ds_git_version());
LOGD("DeepSpeech: %s", ds_git_version());
#endif
if (!aModelPath || strlen(aModelPath) < 1) {
std::cerr << "No model specified, cannot continue." << std::endl;
return STT_ERR_NO_MODEL;
return DS_ERR_NO_MODEL;
}
std::unique_ptr<ModelState> model(
@ -292,79 +292,79 @@ STT_CreateModel(const char* aModelPath,
if (!model) {
std::cerr << "Could not allocate model state." << std::endl;
return STT_ERR_FAIL_CREATE_MODEL;
return DS_ERR_FAIL_CREATE_MODEL;
}
int err = model->init(aModelPath);
if (err != STT_ERR_OK) {
if (err != DS_ERR_OK) {
return err;
}
*retval = model.release();
return STT_ERR_OK;
return DS_ERR_OK;
}
unsigned int
STT_GetModelBeamWidth(const ModelState* aCtx)
DS_GetModelBeamWidth(const ModelState* aCtx)
{
return aCtx->beam_width_;
}
int
STT_SetModelBeamWidth(ModelState* aCtx, unsigned int aBeamWidth)
DS_SetModelBeamWidth(ModelState* aCtx, unsigned int aBeamWidth)
{
aCtx->beam_width_ = aBeamWidth;
return 0;
}
int
STT_GetModelSampleRate(const ModelState* aCtx)
DS_GetModelSampleRate(const ModelState* aCtx)
{
return aCtx->sample_rate_;
}
void
STT_FreeModel(ModelState* ctx)
DS_FreeModel(ModelState* ctx)
{
delete ctx;
}
int
STT_EnableExternalScorer(ModelState* aCtx,
DS_EnableExternalScorer(ModelState* aCtx,
const char* aScorerPath)
{
std::unique_ptr<Scorer> scorer(new Scorer());
int err = scorer->init(aScorerPath, aCtx->alphabet_);
if (err != 0) {
return STT_ERR_INVALID_SCORER;
return DS_ERR_INVALID_SCORER;
}
aCtx->scorer_ = std::move(scorer);
return STT_ERR_OK;
return DS_ERR_OK;
}
int
STT_DisableExternalScorer(ModelState* aCtx)
DS_DisableExternalScorer(ModelState* aCtx)
{
if (aCtx->scorer_) {
aCtx->scorer_.reset();
return STT_ERR_OK;
return DS_ERR_OK;
}
return STT_ERR_SCORER_NOT_ENABLED;
return DS_ERR_SCORER_NOT_ENABLED;
}
int STT_SetScorerAlphaBeta(ModelState* aCtx,
int DS_SetScorerAlphaBeta(ModelState* aCtx,
float aAlpha,
float aBeta)
{
if (aCtx->scorer_) {
aCtx->scorer_->reset_params(aAlpha, aBeta);
return STT_ERR_OK;
return DS_ERR_OK;
}
return STT_ERR_SCORER_NOT_ENABLED;
return DS_ERR_SCORER_NOT_ENABLED;
}
int
STT_CreateStream(ModelState* aCtx,
DS_CreateStream(ModelState* aCtx,
StreamingState** retval)
{
*retval = nullptr;
@ -372,7 +372,7 @@ STT_CreateStream(ModelState* aCtx,
std::unique_ptr<StreamingState> ctx(new StreamingState());
if (!ctx) {
std::cerr << "Could not allocate streaming state." << std::endl;
return STT_ERR_FAIL_CREATE_STREAM;
return DS_ERR_FAIL_CREATE_STREAM;
}
ctx->audio_buffer_.reserve(aCtx->audio_win_len_);
@ -393,11 +393,11 @@ STT_CreateStream(ModelState* aCtx,
aCtx->scorer_);
*retval = ctx.release();
return STT_ERR_OK;
return DS_ERR_OK;
}
void
STT_FeedAudioContent(StreamingState* aSctx,
DS_FeedAudioContent(StreamingState* aSctx,
const short* aBuffer,
unsigned int aBufferSize)
{
@ -405,32 +405,32 @@ STT_FeedAudioContent(StreamingState* aSctx,
}
char*
STT_IntermediateDecode(const StreamingState* aSctx)
DS_IntermediateDecode(const StreamingState* aSctx)
{
return aSctx->intermediateDecode();
}
Metadata*
STT_IntermediateDecodeWithMetadata(const StreamingState* aSctx,
DS_IntermediateDecodeWithMetadata(const StreamingState* aSctx,
unsigned int aNumResults)
{
return aSctx->intermediateDecodeWithMetadata(aNumResults);
}
char*
STT_FinishStream(StreamingState* aSctx)
DS_FinishStream(StreamingState* aSctx)
{
char* str = aSctx->finishStream();
STT_FreeStream(aSctx);
DS_FreeStream(aSctx);
return str;
}
Metadata*
STT_FinishStreamWithMetadata(StreamingState* aSctx,
DS_FinishStreamWithMetadata(StreamingState* aSctx,
unsigned int aNumResults)
{
Metadata* result = aSctx->finishStreamWithMetadata(aNumResults);
STT_FreeStream(aSctx);
DS_FreeStream(aSctx);
return result;
}
@ -440,41 +440,41 @@ CreateStreamAndFeedAudioContent(ModelState* aCtx,
unsigned int aBufferSize)
{
StreamingState* ctx;
int status = STT_CreateStream(aCtx, &ctx);
if (status != STT_ERR_OK) {
int status = DS_CreateStream(aCtx, &ctx);
if (status != DS_ERR_OK) {
return nullptr;
}
STT_FeedAudioContent(ctx, aBuffer, aBufferSize);
DS_FeedAudioContent(ctx, aBuffer, aBufferSize);
return ctx;
}
char*
STT_SpeechToText(ModelState* aCtx,
DS_SpeechToText(ModelState* aCtx,
const short* aBuffer,
unsigned int aBufferSize)
{
StreamingState* ctx = CreateStreamAndFeedAudioContent(aCtx, aBuffer, aBufferSize);
return STT_FinishStream(ctx);
return DS_FinishStream(ctx);
}
Metadata*
STT_SpeechToTextWithMetadata(ModelState* aCtx,
DS_SpeechToTextWithMetadata(ModelState* aCtx,
const short* aBuffer,
unsigned int aBufferSize,
unsigned int aNumResults)
{
StreamingState* ctx = CreateStreamAndFeedAudioContent(aCtx, aBuffer, aBufferSize);
return STT_FinishStreamWithMetadata(ctx, aNumResults);
return DS_FinishStreamWithMetadata(ctx, aNumResults);
}
void
STT_FreeStream(StreamingState* aSctx)
DS_FreeStream(StreamingState* aSctx)
{
delete aSctx;
}
void
STT_FreeMetadata(Metadata* m)
DS_FreeMetadata(Metadata* m)
{
if (m) {
for (int i = 0; i < m->num_transcripts; ++i) {
@ -491,13 +491,13 @@ STT_FreeMetadata(Metadata* m)
}
void
STT_FreeString(char* str)
DS_FreeString(char* str)
{
free(str);
}
char*
STT_Version()
DS_Version()
{
return strdup(ds_version());
}
@ -1,5 +1,5 @@
#ifndef MOZILLA_VOICE_STT_H
#define MOZILLA_VOICE_STT_H
#ifndef DEEPSPEECH_H
#define DEEPSPEECH_H
#ifdef __cplusplus
extern "C" {
@ -7,12 +7,12 @@ extern "C" {
#ifndef SWIG
#if defined _MSC_VER
#define STT_EXPORT __declspec(dllexport)
#define DEEPSPEECH_EXPORT __declspec(dllexport)
#else
#define STT_EXPORT __attribute__ ((visibility("default")))
#define DEEPSPEECH_EXPORT __attribute__ ((visibility("default")))
#endif /*End of _MSC_VER*/
#else
#define STT_EXPORT
#define DEEPSPEECH_EXPORT
#endif
typedef struct ModelState ModelState;
@ -61,89 +61,89 @@ typedef struct Metadata {
// sphinx-doc: error_code_listing_start
#define STT_FOR_EACH_ERROR(APPLY) \
APPLY(STT_ERR_OK, 0x0000, "No error.") \
APPLY(STT_ERR_NO_MODEL, 0x1000, "Missing model information.") \
APPLY(STT_ERR_INVALID_ALPHABET, 0x2000, "Invalid alphabet embedded in model. (Data corruption?)") \
APPLY(STT_ERR_INVALID_SHAPE, 0x2001, "Invalid model shape.") \
APPLY(STT_ERR_INVALID_SCORER, 0x2002, "Invalid scorer file.") \
APPLY(STT_ERR_MODEL_INCOMPATIBLE, 0x2003, "Incompatible model.") \
APPLY(STT_ERR_SCORER_NOT_ENABLED, 0x2004, "External scorer is not enabled.") \
APPLY(STT_ERR_SCORER_UNREADABLE, 0x2005, "Could not read scorer file.") \
APPLY(STT_ERR_SCORER_INVALID_LM, 0x2006, "Could not recognize language model header in scorer.") \
APPLY(STT_ERR_SCORER_NO_TRIE, 0x2007, "Reached end of scorer file before loading vocabulary trie.") \
APPLY(STT_ERR_SCORER_INVALID_TRIE, 0x2008, "Invalid magic in trie header.") \
APPLY(STT_ERR_SCORER_VERSION_MISMATCH, 0x2009, "Scorer file version does not match expected version.") \
APPLY(STT_ERR_FAIL_INIT_MMAP, 0x3000, "Failed to initialize memory mapped model.") \
APPLY(STT_ERR_FAIL_INIT_SESS, 0x3001, "Failed to initialize the session.") \
APPLY(STT_ERR_FAIL_INTERPRETER, 0x3002, "Interpreter failed.") \
APPLY(STT_ERR_FAIL_RUN_SESS, 0x3003, "Failed to run the session.") \
APPLY(STT_ERR_FAIL_CREATE_STREAM, 0x3004, "Error creating the stream.") \
APPLY(STT_ERR_FAIL_READ_PROTOBUF, 0x3005, "Error reading the proto buffer model file.") \
APPLY(STT_ERR_FAIL_CREATE_SESS, 0x3006, "Failed to create session.") \
APPLY(STT_ERR_FAIL_CREATE_MODEL, 0x3007, "Could not allocate model state.")
#define DS_FOR_EACH_ERROR(APPLY) \
APPLY(DS_ERR_OK, 0x0000, "No error.") \
APPLY(DS_ERR_NO_MODEL, 0x1000, "Missing model information.") \
APPLY(DS_ERR_INVALID_ALPHABET, 0x2000, "Invalid alphabet embedded in model. (Data corruption?)") \
APPLY(DS_ERR_INVALID_SHAPE, 0x2001, "Invalid model shape.") \
APPLY(DS_ERR_INVALID_SCORER, 0x2002, "Invalid scorer file.") \
APPLY(DS_ERR_MODEL_INCOMPATIBLE, 0x2003, "Incompatible model.") \
APPLY(DS_ERR_SCORER_NOT_ENABLED, 0x2004, "External scorer is not enabled.") \
APPLY(DS_ERR_SCORER_UNREADABLE, 0x2005, "Could not read scorer file.") \
APPLY(DS_ERR_SCORER_INVALID_LM, 0x2006, "Could not recognize language model header in scorer.") \
APPLY(DS_ERR_SCORER_NO_TRIE, 0x2007, "Reached end of scorer file before loading vocabulary trie.") \
APPLY(DS_ERR_SCORER_INVALID_TRIE, 0x2008, "Invalid magic in trie header.") \
APPLY(DS_ERR_SCORER_VERSION_MISMATCH, 0x2009, "Scorer file version does not match expected version.") \
APPLY(DS_ERR_FAIL_INIT_MMAP, 0x3000, "Failed to initialize memory mapped model.") \
APPLY(DS_ERR_FAIL_INIT_SESS, 0x3001, "Failed to initialize the session.") \
APPLY(DS_ERR_FAIL_INTERPRETER, 0x3002, "Interpreter failed.") \
APPLY(DS_ERR_FAIL_RUN_SESS, 0x3003, "Failed to run the session.") \
APPLY(DS_ERR_FAIL_CREATE_STREAM, 0x3004, "Error creating the stream.") \
APPLY(DS_ERR_FAIL_READ_PROTOBUF, 0x3005, "Error reading the proto buffer model file.") \
APPLY(DS_ERR_FAIL_CREATE_SESS, 0x3006, "Failed to create session.") \
APPLY(DS_ERR_FAIL_CREATE_MODEL, 0x3007, "Could not allocate model state.")
// sphinx-doc: error_code_listing_end
enum STT_Error_Codes
enum DeepSpeech_Error_Codes
{
#define DEFINE(NAME, VALUE, DESC) NAME = VALUE,
STT_FOR_EACH_ERROR(DEFINE)
DS_FOR_EACH_ERROR(DEFINE)
#undef DEFINE
};
/**
* @brief An object providing an interface to a trained Mozilla Voice STT model.
* @brief An object providing an interface to a trained DeepSpeech model.
*
* @param aModelPath The path to the frozen model graph.
* @param[out] retval a ModelState pointer
*
* @return Zero on success, non-zero on failure.
*/
STT_EXPORT
int STT_CreateModel(const char* aModelPath,
ModelState** retval);
DEEPSPEECH_EXPORT
int DS_CreateModel(const char* aModelPath,
ModelState** retval);
/**
* @brief Get beam width value used by the model. If {@link STT_SetModelBeamWidth}
* @brief Get beam width value used by the model. If {@link DS_SetModelBeamWidth}
* was not called before, will return the default value loaded from the
* model file.
*
* @param aCtx A ModelState pointer created with {@link STT_CreateModel}.
* @param aCtx A ModelState pointer created with {@link DS_CreateModel}.
*
* @return Beam width value used by the model.
*/
STT_EXPORT
unsigned int STT_GetModelBeamWidth(const ModelState* aCtx);
DEEPSPEECH_EXPORT
unsigned int DS_GetModelBeamWidth(const ModelState* aCtx);
/**
* @brief Set beam width value used by the model.
*
* @param aCtx A ModelState pointer created with {@link STT_CreateModel}.
* @param aCtx A ModelState pointer created with {@link DS_CreateModel}.
* @param aBeamWidth The beam width used by the model. A larger beam width value
* generates better results at the cost of decoding time.
*
* @return Zero on success, non-zero on failure.
*/
STT_EXPORT
int STT_SetModelBeamWidth(ModelState* aCtx,
unsigned int aBeamWidth);
DEEPSPEECH_EXPORT
int DS_SetModelBeamWidth(ModelState* aCtx,
unsigned int aBeamWidth);
/**
* @brief Return the sample rate expected by a model.
*
* @param aCtx A ModelState pointer created with {@link STT_CreateModel}.
* @param aCtx A ModelState pointer created with {@link DS_CreateModel}.
*
* @return Sample rate expected by the model for its input.
*/
STT_EXPORT
int STT_GetModelSampleRate(const ModelState* aCtx);
DEEPSPEECH_EXPORT
int DS_GetModelSampleRate(const ModelState* aCtx);
/**
* @brief Frees associated resources and destroys model object.
*/
STT_EXPORT
void STT_FreeModel(ModelState* ctx);
DEEPSPEECH_EXPORT
void DS_FreeModel(ModelState* ctx);
/**
* @brief Enable decoding using an external scorer.
@ -153,9 +153,9 @@ void STT_FreeModel(ModelState* ctx);
*
* @return Zero on success, non-zero on failure (invalid arguments).
*/
STT_EXPORT
int STT_EnableExternalScorer(ModelState* aCtx,
const char* aScorerPath);
DEEPSPEECH_EXPORT
int DS_EnableExternalScorer(ModelState* aCtx,
const char* aScorerPath);
/**
* @brief Disable decoding using an external scorer.
@ -164,8 +164,8 @@ int STT_EnableExternalScorer(ModelState* aCtx,
*
* @return Zero on success, non-zero on failure.
*/
STT_EXPORT
int STT_DisableExternalScorer(ModelState* aCtx);
DEEPSPEECH_EXPORT
int DS_DisableExternalScorer(ModelState* aCtx);
/**
* @brief Set hyperparameters alpha and beta of the external scorer.
@ -176,13 +176,13 @@ int STT_DisableExternalScorer(ModelState* aCtx);
*
* @return Zero on success, non-zero on failure.
*/
STT_EXPORT
int STT_SetScorerAlphaBeta(ModelState* aCtx,
float aAlpha,
float aBeta);
DEEPSPEECH_EXPORT
int DS_SetScorerAlphaBeta(ModelState* aCtx,
float aAlpha,
float aBeta);
/**
* @brief Use the Mozilla Voice STT model to convert speech to text.
* @brief Use the DeepSpeech model to convert speech to text.
*
* @param aCtx The ModelState pointer for the model to use.
* @param aBuffer A 16-bit, mono raw audio signal at the appropriate
@ -190,15 +190,15 @@ int STT_SetScorerAlphaBeta(ModelState* aCtx,
* @param aBufferSize The number of samples in the audio signal.
*
* @return The STT result. The user is responsible for freeing the string using
* {@link STT_FreeString()}. Returns NULL on error.
* {@link DS_FreeString()}. Returns NULL on error.
*/
STT_EXPORT
char* STT_SpeechToText(ModelState* aCtx,
const short* aBuffer,
unsigned int aBufferSize);
DEEPSPEECH_EXPORT
char* DS_SpeechToText(ModelState* aCtx,
const short* aBuffer,
unsigned int aBufferSize);
/**
* @brief Use the Mozilla Voice STT model to convert speech to text and output results
* @brief Use the DeepSpeech model to convert speech to text and output results
* including metadata.
*
* @param aCtx The ModelState pointer for the model to use.
@ -209,19 +209,19 @@ char* STT_SpeechToText(ModelState* aCtx,
*
* @return Metadata struct containing multiple CandidateTranscript structs. Each
* transcript has per-token metadata including timing information. The
* user is responsible for freeing Metadata by calling {@link STT_FreeMetadata()}.
* user is responsible for freeing Metadata by calling {@link DS_FreeMetadata()}.
* Returns NULL on error.
*/
STT_EXPORT
Metadata* STT_SpeechToTextWithMetadata(ModelState* aCtx,
const short* aBuffer,
unsigned int aBufferSize,
unsigned int aNumResults);
DEEPSPEECH_EXPORT
Metadata* DS_SpeechToTextWithMetadata(ModelState* aCtx,
const short* aBuffer,
unsigned int aBufferSize,
unsigned int aNumResults);
/**
* @brief Create a new streaming inference state. The streaming state returned
* by this function can then be passed to {@link STT_FeedAudioContent()}
* and {@link STT_FinishStream()}.
* by this function can then be passed to {@link DS_FeedAudioContent()}
* and {@link DS_FinishStream()}.
*
* @param aCtx The ModelState pointer for the model to use.
* @param[out] retval an opaque pointer that represents the streaming state. Can
@ -229,129 +229,129 @@ Metadata* STT_SpeechToTextWithMetadata(ModelState* aCtx,
*
* @return Zero for success, non-zero on failure.
*/
STT_EXPORT
int STT_CreateStream(ModelState* aCtx,
StreamingState** retval);
DEEPSPEECH_EXPORT
int DS_CreateStream(ModelState* aCtx,
StreamingState** retval);
/**
* @brief Feed audio samples to an ongoing streaming inference.
*
* @param aSctx A streaming state pointer returned by {@link STT_CreateStream()}.
* @param aSctx A streaming state pointer returned by {@link DS_CreateStream()}.
* @param aBuffer An array of 16-bit, mono raw audio samples at the
* appropriate sample rate (matching what the model was trained on).
* @param aBufferSize The number of samples in @p aBuffer.
*/
STT_EXPORT
void STT_FeedAudioContent(StreamingState* aSctx,
const short* aBuffer,
unsigned int aBufferSize);
DEEPSPEECH_EXPORT
void DS_FeedAudioContent(StreamingState* aSctx,
const short* aBuffer,
unsigned int aBufferSize);
/**
* @brief Compute the intermediate decoding of an ongoing streaming inference.
*
* @param aSctx A streaming state pointer returned by {@link STT_CreateStream()}.
* @param aSctx A streaming state pointer returned by {@link DS_CreateStream()}.
*
* @return The STT intermediate result. The user is responsible for freeing the
* string using {@link STT_FreeString()}.
* string using {@link DS_FreeString()}.
*/
STT_EXPORT
char* STT_IntermediateDecode(const StreamingState* aSctx);
DEEPSPEECH_EXPORT
char* DS_IntermediateDecode(const StreamingState* aSctx);
/**
* @brief Compute the intermediate decoding of an ongoing streaming inference,
* return results including metadata.
*
* @param aSctx A streaming state pointer returned by {@link STT_CreateStream()}.
* @param aSctx A streaming state pointer returned by {@link DS_CreateStream()}.
* @param aNumResults The number of candidate transcripts to return.
*
* @return Metadata struct containing multiple candidate transcripts. Each transcript
* has per-token metadata including timing information. The user is
* responsible for freeing Metadata by calling {@link STT_FreeMetadata()}.
* responsible for freeing Metadata by calling {@link DS_FreeMetadata()}.
* Returns NULL on error.
*/
STT_EXPORT
Metadata* STT_IntermediateDecodeWithMetadata(const StreamingState* aSctx,
unsigned int aNumResults);
DEEPSPEECH_EXPORT
Metadata* DS_IntermediateDecodeWithMetadata(const StreamingState* aSctx,
unsigned int aNumResults);
/**
* @brief Compute the final decoding of an ongoing streaming inference and return
* the result. Signals the end of an ongoing streaming inference.
*
* @param aSctx A streaming state pointer returned by {@link STT_CreateStream()}.
* @param aSctx A streaming state pointer returned by {@link DS_CreateStream()}.
*
* @return The STT result. The user is responsible for freeing the string using
* {@link STT_FreeString()}.
* {@link DS_FreeString()}.
*
* @note This method will free the state pointer (@p aSctx).
*/
STT_EXPORT
char* STT_FinishStream(StreamingState* aSctx);
DEEPSPEECH_EXPORT
char* DS_FinishStream(StreamingState* aSctx);
/**
* @brief Compute the final decoding of an ongoing streaming inference and return
* results including metadata. Signals the end of an ongoing streaming
* inference.
*
* @param aSctx A streaming state pointer returned by {@link STT_CreateStream()}.
* @param aSctx A streaming state pointer returned by {@link DS_CreateStream()}.
* @param aNumResults The number of candidate transcripts to return.
*
* @return Metadata struct containing multiple candidate transcripts. Each transcript
* has per-token metadata including timing information. The user is
* responsible for freeing Metadata by calling {@link STT_FreeMetadata()}.
* responsible for freeing Metadata by calling {@link DS_FreeMetadata()}.
* Returns NULL on error.
*
* @note This method will free the state pointer (@p aSctx).
*/
STT_EXPORT
Metadata* STT_FinishStreamWithMetadata(StreamingState* aSctx,
unsigned int aNumResults);
DEEPSPEECH_EXPORT
Metadata* DS_FinishStreamWithMetadata(StreamingState* aSctx,
unsigned int aNumResults);
/**
* @brief Destroy a streaming state without decoding the computed logits. This
* can be used if you no longer need the result of an ongoing streaming
* inference and don't want to perform a costly decode operation.
*
* @param aSctx A streaming state pointer returned by {@link STT_CreateStream()}.
* @param aSctx A streaming state pointer returned by {@link DS_CreateStream()}.
*
* @note This method will free the state pointer (@p aSctx).
*/
STT_EXPORT
void STT_FreeStream(StreamingState* aSctx);
DEEPSPEECH_EXPORT
void DS_FreeStream(StreamingState* aSctx);
/**
* @brief Free memory allocated for metadata information.
*/
STT_EXPORT
void STT_FreeMetadata(Metadata* m);
DEEPSPEECH_EXPORT
void DS_FreeMetadata(Metadata* m);
/**
* @brief Free a char* string returned by the Mozilla Voice STT API.
* @brief Free a char* string returned by the DeepSpeech API.
*/
STT_EXPORT
void STT_FreeString(char* str);
DEEPSPEECH_EXPORT
void DS_FreeString(char* str);
/**
* @brief Returns the version of this library. The returned version is a semantic
* version (SemVer 2.0.0). The string returned must be freed with {@link STT_FreeString()}.
* version (SemVer 2.0.0). The string returned must be freed with {@link DS_FreeString()}.
*
* @return The version string.
*/
STT_EXPORT
char* STT_Version();
DEEPSPEECH_EXPORT
char* DS_Version();
/**
* @brief Returns a textual description corresponding to an error code.
* The string returned must be freed with {@link STT_FreeString()}.
* The string returned must be freed with {@link DS_FreeString()}.
*
* @return The error description.
*/
STT_EXPORT
char* STT_ErrorCodeToErrorMessage(int aErrorCode);
DEEPSPEECH_EXPORT
char* DS_ErrorCodeToErrorMessage(int aErrorCode);
#undef STT_EXPORT
#undef DEEPSPEECH_EXPORT
#ifdef __cplusplus
}
#endif
#endif /* MOZILLA_VOICE_STT_H */
#endif /* DEEPSPEECH_H */
@ -1,8 +1,8 @@
#include "mozilla_voice_stt.h"
#include "deepspeech.h"
#include <string.h>
char*
STT_ErrorCodeToErrorMessage(int aErrorCode)
DS_ErrorCodeToErrorMessage(int aErrorCode)
{
#define RETURN_MESSAGE(NAME, VALUE, DESC) \
case NAME: \
@ -10,7 +10,7 @@ STT_ErrorCodeToErrorMessage(int aErrorCode)
switch(aErrorCode)
{
STT_FOR_EACH_ERROR(RETURN_MESSAGE)
DS_FOR_EACH_ERROR(RETURN_MESSAGE)
default:
return strdup("Unknown error, please make sure you are using the correct native binary.");
}
@ -18,9 +18,9 @@ ifeq ($(findstring _NT,$(OS)),_NT)
PLATFORM_EXE_SUFFIX := .exe
endif
DEEPSPEECH_BIN := mozilla_voice_stt$(PLATFORM_EXE_SUFFIX)
DEEPSPEECH_BIN := deepspeech$(PLATFORM_EXE_SUFFIX)
CFLAGS_DEEPSPEECH := -std=c++11 -o $(DEEPSPEECH_BIN)
LINK_DEEPSPEECH := -lmozilla_voice_stt
LINK_DEEPSPEECH := -ldeepspeech
LINK_PATH_DEEPSPEECH := -L${TFDIR}/bazel-bin/native_client
ifeq ($(TARGET),host)
@ -53,7 +53,7 @@ TOOL_CC := cl.exe
TOOL_CXX := cl.exe
TOOL_LD := link.exe
TOOL_LIBEXE := lib.exe
LINK_DEEPSPEECH := $(TFDIR)\bazel-bin\native_client\libmozilla_voice_stt.so.if.lib
LINK_DEEPSPEECH := $(TFDIR)\bazel-bin\native_client\libdeepspeech.so.if.lib
LINK_PATH_DEEPSPEECH :=
CFLAGS_DEEPSPEECH := -nologo -Fe$(DEEPSPEECH_BIN)
SOX_CFLAGS :=
@ -174,7 +174,7 @@ define copy_missing_libs
new_missing="$$( (for f in $$(otool -L $$lib 2>/dev/null | tail -n +2 | awk '{ print $$1 }' | grep -v '$$lib'); do ls -hal $$f; done;) 2>&1 | grep 'No such' | cut -d':' -f2 | xargs basename -a)"; \
missing_libs="$$missing_libs $$new_missing"; \
elif [ "$(OS)" = "${TC_MSYS_VERSION}" ]; then \
missing_libs="libmozilla_voice_stt.so"; \
missing_libs="libdeepspeech.so"; \
else \
missing_libs="$$missing_libs $$($(LDD) $$lib | grep 'not found' | awk '{ print $$1 }')"; \
fi; \
@ -2,9 +2,9 @@ Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Version 16
VisualStudioVersion = 16.0.30204.135
MinimumVisualStudioVersion = 10.0.40219.1
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "MozillaVoiceSttClient", "MozillaVoiceSttClient\MozillaVoiceSttClient.csproj", "{56DE4091-BBBE-47E4-852D-7268B33B971F}"
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "DeepSpeechClient", "DeepSpeechClient\DeepSpeechClient.csproj", "{56DE4091-BBBE-47E4-852D-7268B33B971F}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MozillaVoiceSttConsole", "MozillaVoiceSttConsole\MozillaVoiceSttConsole.csproj", "{312965E5-C4F6-4D95-BA64-79906B8BC7AC}"
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "DeepSpeechConsole", "DeepSpeechConsole\DeepSpeechConsole.csproj", "{312965E5-C4F6-4D95-BA64-79906B8BC7AC}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
@ -1,34 +1,34 @@
using MozillaVoiceSttClient.Interfaces;
using MozillaVoiceSttClient.Extensions;
using DeepSpeechClient.Interfaces;
using DeepSpeechClient.Extensions;
using System;
using System.IO;
using MozillaVoiceSttClient.Enums;
using MozillaVoiceSttClient.Models;
using DeepSpeechClient.Enums;
using DeepSpeechClient.Models;
namespace MozillaVoiceSttClient
namespace DeepSpeechClient
{
/// <summary>
/// Concrete implementation of <see cref="MozillaVoiceStt.Interfaces.IMozillaVoiceSttModel"/>.
/// Concrete implementation of <see cref="DeepSpeechClient.Interfaces.IDeepSpeech"/>.
/// </summary>
public class MozillaVoiceSttModel : IMozillaVoiceSttModel
public class DeepSpeech : IDeepSpeech
{
private unsafe IntPtr** _modelStatePP;
/// <summary>
/// Initializes a new instance of <see cref="MozillaVoiceSttModel"/> class and creates a new acoustic model.
/// Initializes a new instance of <see cref="DeepSpeech"/> class and creates a new acoustic model.
/// </summary>
/// <param name="aModelPath">The path to the frozen model graph.</param>
/// <exception cref="ArgumentException">Thrown when the native binary failed to create the model.</exception>
public MozillaVoiceSttModel(string aModelPath)
public DeepSpeech(string aModelPath)
{
CreateModel(aModelPath);
}
#region IMozillaVoiceSttModel
#region IDeepSpeech
/// <summary>
/// Create an object providing an interface to a trained Mozilla Voice STT model.
/// Create an object providing an interface to a trained DeepSpeech model.
/// </summary>
/// <param name="aModelPath">The path to the frozen model graph.</param>
/// <exception cref="ArgumentException">Thrown when the native binary failed to create the model.</exception>
@ -48,7 +48,7 @@ namespace MozillaVoiceSttClient
{
throw new FileNotFoundException(exceptionMessage);
}
var resultCode = NativeImp.STT_CreateModel(aModelPath,
var resultCode = NativeImp.DS_CreateModel(aModelPath,
ref _modelStatePP);
EvaluateResultCode(resultCode);
}
@ -60,7 +60,7 @@ namespace MozillaVoiceSttClient
/// <returns>Beam width value used by the model.</returns>
public unsafe uint GetModelBeamWidth()
{
return NativeImp.STT_GetModelBeamWidth(_modelStatePP);
return NativeImp.DS_GetModelBeamWidth(_modelStatePP);
}
/// <summary>
@ -70,7 +70,7 @@ namespace MozillaVoiceSttClient
/// <exception cref="ArgumentException">Thrown on failure.</exception>
public unsafe void SetModelBeamWidth(uint aBeamWidth)
{
var resultCode = NativeImp.STT_SetModelBeamWidth(_modelStatePP, aBeamWidth);
var resultCode = NativeImp.DS_SetModelBeamWidth(_modelStatePP, aBeamWidth);
EvaluateResultCode(resultCode);
}
@ -80,7 +80,7 @@ namespace MozillaVoiceSttClient
/// <returns>Sample rate.</returns>
public unsafe int GetModelSampleRate()
{
return NativeImp.STT_GetModelSampleRate(_modelStatePP);
return NativeImp.DS_GetModelSampleRate(_modelStatePP);
}
/// <summary>
@ -89,9 +89,9 @@ namespace MozillaVoiceSttClient
/// <param name="resultCode">Native result code.</param>
private void EvaluateResultCode(ErrorCodes resultCode)
{
if (resultCode != ErrorCodes.STT_ERR_OK)
if (resultCode != ErrorCodes.DS_ERR_OK)
{
throw new ArgumentException(NativeImp.STT_ErrorCodeToErrorMessage((int)resultCode).PtrToString());
throw new ArgumentException(NativeImp.DS_ErrorCodeToErrorMessage((int)resultCode).PtrToString());
}
}
@ -100,7 +100,7 @@ namespace MozillaVoiceSttClient
/// </summary>
public unsafe void Dispose()
{
NativeImp.STT_FreeModel(_modelStatePP);
NativeImp.DS_FreeModel(_modelStatePP);
}
/// <summary>
@ -120,7 +120,7 @@ namespace MozillaVoiceSttClient
throw new FileNotFoundException($"Cannot find the scorer file: {aScorerPath}");
}
var resultCode = NativeImp.STT_EnableExternalScorer(_modelStatePP, aScorerPath);
var resultCode = NativeImp.DS_EnableExternalScorer(_modelStatePP, aScorerPath);
EvaluateResultCode(resultCode);
}
@ -130,7 +130,7 @@ namespace MozillaVoiceSttClient
/// <exception cref="ArgumentException">Thrown when an external scorer is not enabled.</exception>
public unsafe void DisableExternalScorer()
{
var resultCode = NativeImp.STT_DisableExternalScorer(_modelStatePP);
var resultCode = NativeImp.DS_DisableExternalScorer(_modelStatePP);
EvaluateResultCode(resultCode);
}
@ -142,7 +142,7 @@ namespace MozillaVoiceSttClient
/// <exception cref="ArgumentException">Thrown when an external scorer is not enabled.</exception>
public unsafe void SetScorerAlphaBeta(float aAlpha, float aBeta)
{
var resultCode = NativeImp.STT_SetScorerAlphaBeta(_modelStatePP,
var resultCode = NativeImp.DS_SetScorerAlphaBeta(_modelStatePP,
aAlpha,
aBeta);
EvaluateResultCode(resultCode);
@ -153,9 +153,9 @@ namespace MozillaVoiceSttClient
/// </summary>
/// <param name="stream">Instance of the stream to feed the data.</param>
/// <param name="aBuffer">An array of 16-bit, mono raw audio samples at the appropriate sample rate (matching what the model was trained on).</param>
public unsafe void FeedAudioContent(MozillaVoiceSttStream stream, short[] aBuffer, uint aBufferSize)
public unsafe void FeedAudioContent(DeepSpeechStream stream, short[] aBuffer, uint aBufferSize)
{
NativeImp.STT_FeedAudioContent(stream.GetNativePointer(), aBuffer, aBufferSize);
NativeImp.DS_FeedAudioContent(stream.GetNativePointer(), aBuffer, aBufferSize);
}
/// <summary>
@ -163,9 +163,9 @@ namespace MozillaVoiceSttClient
/// </summary>
/// <param name="stream">Instance of the stream to finish.</param>
/// <returns>The STT result.</returns>
public unsafe string FinishStream(MozillaVoiceSttStream stream)
public unsafe string FinishStream(DeepSpeechStream stream)
{
return NativeImp.STT_FinishStream(stream.GetNativePointer()).PtrToString();
return NativeImp.DS_FinishStream(stream.GetNativePointer()).PtrToString();
}
/// <summary>
@ -174,9 +174,9 @@ namespace MozillaVoiceSttClient
/// <param name="stream">Instance of the stream to finish.</param>
/// <param name="aNumResults">Maximum number of candidate transcripts to return. Returned list might be smaller than this.</param>
/// <returns>The extended metadata result.</returns>
public unsafe Metadata FinishStreamWithMetadata(MozillaVoiceSttStream stream, uint aNumResults)
public unsafe Metadata FinishStreamWithMetadata(DeepSpeechStream stream, uint aNumResults)
{
return NativeImp.STT_FinishStreamWithMetadata(stream.GetNativePointer(), aNumResults).PtrToMetadata();
return NativeImp.DS_FinishStreamWithMetadata(stream.GetNativePointer(), aNumResults).PtrToMetadata();
}
/// <summary>
@ -184,9 +184,9 @@ namespace MozillaVoiceSttClient
/// </summary>
/// <param name="stream">Instance of the stream to decode.</param>
/// <returns>The STT intermediate result.</returns>
public unsafe string IntermediateDecode(MozillaVoiceSttStream stream)
public unsafe string IntermediateDecode(DeepSpeechStream stream)
{
return NativeImp.STT_IntermediateDecode(stream.GetNativePointer()).PtrToString();
return NativeImp.DS_IntermediateDecode(stream.GetNativePointer()).PtrToString();
}
/// <summary>
@ -195,9 +195,9 @@ namespace MozillaVoiceSttClient
/// <param name="stream">Instance of the stream to decode.</param>
/// <param name="aNumResults">Maximum number of candidate transcripts to return. Returned list might be smaller than this.</param>
/// <returns>The STT intermediate result.</returns>
public unsafe Metadata IntermediateDecodeWithMetadata(MozillaVoiceSttStream stream, uint aNumResults)
public unsafe Metadata IntermediateDecodeWithMetadata(DeepSpeechStream stream, uint aNumResults)
{
return NativeImp.STT_IntermediateDecodeWithMetadata(stream.GetNativePointer(), aNumResults).PtrToMetadata();
return NativeImp.DS_IntermediateDecodeWithMetadata(stream.GetNativePointer(), aNumResults).PtrToMetadata();
}
/// <summary>
@ -206,18 +206,18 @@ namespace MozillaVoiceSttClient
/// </summary>
public unsafe string Version()
{
return NativeImp.STT_Version().PtrToString();
return NativeImp.DS_Version().PtrToString();
}
/// <summary>
/// Creates a new streaming inference state.
/// </summary>
public unsafe MozillaVoiceSttStream CreateStream()
public unsafe DeepSpeechStream CreateStream()
{
IntPtr** streamingStatePointer = null;
var resultCode = NativeImp.STT_CreateStream(_modelStatePP, ref streamingStatePointer);
var resultCode = NativeImp.DS_CreateStream(_modelStatePP, ref streamingStatePointer);
EvaluateResultCode(resultCode);
return new MozillaVoiceSttStream(streamingStatePointer);
return new DeepSpeechStream(streamingStatePointer);
}
/// <summary>
@ -225,25 +225,25 @@ namespace MozillaVoiceSttClient
/// This can be used if you no longer need the result of an ongoing streaming
/// inference and don't want to perform a costly decode operation.
/// </summary>
public unsafe void FreeStream(MozillaVoiceSttStream stream)
public unsafe void FreeStream(DeepSpeechStream stream)
{
NativeImp.STT_FreeStream(stream.GetNativePointer());
NativeImp.DS_FreeStream(stream.GetNativePointer());
stream.Dispose();
}
/// <summary>
/// Use the Mozilla Voice STT model to perform Speech-To-Text.
/// Use the DeepSpeech model to perform Speech-To-Text.
/// </summary>
/// <param name="aBuffer">A 16-bit, mono raw audio signal at the appropriate sample rate (matching what the model was trained on).</param>
/// <param name="aBufferSize">The number of samples in the audio signal.</param>
/// <returns>The STT result. Returns NULL on error.</returns>
public unsafe string SpeechToText(short[] aBuffer, uint aBufferSize)
{
return NativeImp.STT_SpeechToText(_modelStatePP, aBuffer, aBufferSize).PtrToString();
return NativeImp.DS_SpeechToText(_modelStatePP, aBuffer, aBufferSize).PtrToString();
}
/// <summary>
/// Use the Mozilla Voice STT model to perform Speech-To-Text, return results including metadata.
/// Use the DeepSpeech model to perform Speech-To-Text, return results including metadata.
/// </summary>
/// <param name="aBuffer">A 16-bit, mono raw audio signal at the appropriate sample rate (matching what the model was trained on).</param>
/// <param name="aBufferSize">The number of samples in the audio signal.</param>
@ -251,7 +251,7 @@ namespace MozillaVoiceSttClient
/// <returns>The extended metadata. Returns NULL on error.</returns>
public unsafe Metadata SpeechToTextWithMetadata(short[] aBuffer, uint aBufferSize, uint aNumResults)
{
return NativeImp.STT_SpeechToTextWithMetadata(_modelStatePP, aBuffer, aBufferSize, aNumResults).PtrToMetadata();
return NativeImp.DS_SpeechToTextWithMetadata(_modelStatePP, aBuffer, aBufferSize, aNumResults).PtrToMetadata();
}
#endregion
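
A minimal usage sketch of the restored class: create the model, optionally attach a scorer, then pass 16-bit samples to SpeechToText. The file names mirror those used elsewhere in this diff; "audio.raw" is a hypothetical headerless 16-bit mono PCM file at the model's sample rate (a real client decodes a WAV container first, as the console example later in this diff does with NAudio).

using System;
using System.IO;
using DeepSpeechClient;

class UsageSketch
{
    static void Main()
    {
        // Model and scorer paths as used by the samples in this repository.
        using (var stt = new DeepSpeech("output_graph.pbmm"))
        {
            stt.EnableExternalScorer("kenlm.scorer");

            // Hypothetical input: raw little-endian 16-bit mono PCM.
            byte[] raw = File.ReadAllBytes("audio.raw");
            short[] samples = new short[raw.Length / 2];
            Buffer.BlockCopy(raw, 0, samples, 0, raw.Length);

            Console.WriteLine(stt.SpeechToText(samples, (uint)samples.Length));
        }
    }
}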

View file

@ -0,0 +1,30 @@
namespace DeepSpeechClient.Enums
{
/// <summary>
/// Error codes from the native DeepSpeech binary.
/// </summary>
internal enum ErrorCodes
{
// OK
DS_ERR_OK = 0x0000,
// Missing information
DS_ERR_NO_MODEL = 0x1000,
// Invalid parameters
DS_ERR_INVALID_ALPHABET = 0x2000,
DS_ERR_INVALID_SHAPE = 0x2001,
DS_ERR_INVALID_SCORER = 0x2002,
DS_ERR_MODEL_INCOMPATIBLE = 0x2003,
DS_ERR_SCORER_NOT_ENABLED = 0x2004,
// Runtime failures
DS_ERR_FAIL_INIT_MMAP = 0x3000,
DS_ERR_FAIL_INIT_SESS = 0x3001,
DS_ERR_FAIL_INTERPRETER = 0x3002,
DS_ERR_FAIL_RUN_SESS = 0x3003,
DS_ERR_FAIL_CREATE_STREAM = 0x3004,
DS_ERR_FAIL_READ_PROTOBUF = 0x3005,
DS_ERR_FAIL_CREATE_SESS = 0x3006,
}
}
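
These codes never surface directly in consuming code: EvaluateResultCode in the client class maps any non-OK value to an ArgumentException carrying the message from DS_ErrorCodeToErrorMessage. A sketch of handling a failed model load ("bad_model.pbmm" is a placeholder path):

using System;
using System.IO;
using DeepSpeechClient;

try
{
    using (var stt = new DeepSpeech("bad_model.pbmm")) { }
}
catch (FileNotFoundException ex)
{
    // Thrown by the wrapper itself, before any native call, when the path is missing.
    Console.Error.WriteLine(ex.Message);
}
catch (ArgumentException ex)
{
    // Thrown when the native library reports a non-OK code,
    // e.g. DS_ERR_MODEL_INCOMPATIBLE.
    Console.Error.WriteLine(ex.Message);
}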

View file

@ -1,9 +1,9 @@
using MozillaVoiceSttClient.Structs;
using DeepSpeechClient.Structs;
using System;
using System.Runtime.InteropServices;
using System.Text;
namespace MozillaVoiceSttClient.Extensions
namespace DeepSpeechClient.Extensions
{
internal static class NativeExtensions
{
@ -20,7 +20,7 @@ namespace MozillaVoiceSttClient.Extensions
byte[] buffer = new byte[len];
Marshal.Copy(intPtr, buffer, 0, buffer.Length);
if (releasePtr)
NativeImp.STT_FreeString(intPtr);
NativeImp.DS_FreeString(intPtr);
string result = Encoding.UTF8.GetString(buffer);
return result;
}
@ -86,7 +86,7 @@ namespace MozillaVoiceSttClient.Extensions
metadata.transcripts += sizeOfCandidateTranscript;
}
NativeImp.STT_FreeMetadata(intPtr);
NativeImp.DS_FreeMetadata(intPtr);
return managedMetadata;
}
}
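
These extensions own the native-to-managed handoff: each pointer coming back from the library is copied into managed memory and the native allocation is freed in the same call. A sketch of the pattern as used inside the wrapper:

using DeepSpeechClient.Extensions;

// DS_Version allocates a UTF-8 string natively; PtrToString copies it
// into a managed string and calls DS_FreeString on the original pointer,
// so no native allocation outlives the call.
string version = NativeImp.DS_Version().PtrToString();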

View file

@ -1,13 +1,13 @@
using MozillaVoiceSttClient.Models;
using DeepSpeechClient.Models;
using System;
using System.IO;
namespace MozillaVoiceSttClient.Interfaces
namespace DeepSpeechClient.Interfaces
{
/// <summary>
/// Client interface of Mozilla Voice STT.
/// Client interface of Mozilla's DeepSpeech implementation.
/// </summary>
public interface IMozillaVoiceSttModel : IDisposable
public interface IDeepSpeech : IDisposable
{
/// <summary>
/// Return version of this library. The returned version is a semantic version
@ -59,7 +59,7 @@ namespace MozillaVoiceSttClient.Interfaces
unsafe void SetScorerAlphaBeta(float aAlpha, float aBeta);
/// <summary>
/// Use the Mozilla Voice STT model to perform Speech-To-Text.
/// Use the DeepSpeech model to perform Speech-To-Text.
/// </summary>
/// <param name="aBuffer">A 16-bit, mono raw audio signal at the appropriate sample rate (matching what the model was trained on).</param>
/// <param name="aBufferSize">The number of samples in the audio signal.</param>
@ -68,7 +68,7 @@ namespace MozillaVoiceSttClient.Interfaces
uint aBufferSize);
/// <summary>
/// Use the Mozilla Voice STT model to perform Speech-To-Text, return results including metadata.
/// Use the DeepSpeech model to perform Speech-To-Text, return results including metadata.
/// </summary>
/// <param name="aBuffer">A 16-bit, mono raw audio signal at the appropriate sample rate (matching what the model was trained on).</param>
/// <param name="aBufferSize">The number of samples in the audio signal.</param>
@ -83,26 +83,26 @@ namespace MozillaVoiceSttClient.Interfaces
/// This can be used if you no longer need the result of an ongoing streaming
/// inference and don't want to perform a costly decode operation.
/// </summary>
unsafe void FreeStream(MozillaVoiceSttStream stream);
unsafe void FreeStream(DeepSpeechStream stream);
/// <summary>
/// Creates a new streaming inference state.
/// </summary>
unsafe MozillaVoiceSttStream CreateStream();
unsafe DeepSpeechStream CreateStream();
/// <summary>
/// Feeds audio samples to an ongoing streaming inference.
/// </summary>
/// <param name="stream">Instance of the stream to feed the data.</param>
/// <param name="aBuffer">An array of 16-bit, mono raw audio samples at the appropriate sample rate (matching what the model was trained on).</param>
unsafe void FeedAudioContent(MozillaVoiceSttStream stream, short[] aBuffer, uint aBufferSize);
unsafe void FeedAudioContent(DeepSpeechStream stream, short[] aBuffer, uint aBufferSize);
/// <summary>
/// Computes the intermediate decoding of an ongoing streaming inference.
/// </summary>
/// <param name="stream">Instance of the stream to decode.</param>
/// <returns>The STT intermediate result.</returns>
unsafe string IntermediateDecode(MozillaVoiceSttStream stream);
unsafe string IntermediateDecode(DeepSpeechStream stream);
/// <summary>
/// Computes the intermediate decoding of an ongoing streaming inference, including metadata.
@ -110,14 +110,14 @@ namespace MozillaVoiceSttClient.Interfaces
/// <param name="stream">Instance of the stream to decode.</param>
/// <param name="aNumResults">Maximum number of candidate transcripts to return. Returned list might be smaller than this.</param>
/// <returns>The extended metadata result.</returns>
unsafe Metadata IntermediateDecodeWithMetadata(MozillaVoiceSttStream stream, uint aNumResults);
unsafe Metadata IntermediateDecodeWithMetadata(DeepSpeechStream stream, uint aNumResults);
/// <summary>
/// Closes the ongoing streaming inference, returns the STT result over the whole audio signal.
/// </summary>
/// <param name="stream">Instance of the stream to finish.</param>
/// <returns>The STT result.</returns>
unsafe string FinishStream(MozillaVoiceSttStream stream);
unsafe string FinishStream(DeepSpeechStream stream);
/// <summary>
/// Closes the ongoing streaming inference, returns the STT result over the whole audio signal, including metadata.
@ -125,6 +125,6 @@ namespace MozillaVoiceSttClient.Interfaces
/// <param name="stream">Instance of the stream to finish.</param>
/// <param name="aNumResults">Maximum number of candidate transcripts to return. Returned list might be smaller than this.</param>
/// <returns>The extended metadata result.</returns>
unsafe Metadata FinishStreamWithMetadata(MozillaVoiceSttStream stream, uint aNumResults);
unsafe Metadata FinishStreamWithMetadata(DeepSpeechStream stream, uint aNumResults);
}
}
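
The streaming half of this interface supports incremental recognition. A sketch of the intended call sequence, assuming the caller supplies the model instance and a source of 16-bit audio chunks:

using System.Collections.Generic;
using DeepSpeechClient.Interfaces;
using DeepSpeechClient.Models;

static string StreamingDecode(IDeepSpeech stt, IEnumerable<short[]> chunks)
{
    DeepSpeechStream stream = stt.CreateStream();
    foreach (short[] chunk in chunks)
    {
        stt.FeedAudioContent(stream, chunk, (uint)chunk.Length);
        // Optional: inspect the partial hypothesis as audio arrives.
        string partial = stt.IntermediateDecode(stream);
    }
    // Closes the stream and returns the transcript over the whole signal.
    return stt.FinishStream(stream);
}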

View file

@ -1,4 +1,4 @@
namespace MozillaVoiceSttClient.Models
namespace DeepSpeechClient.Models
{
/// <summary>
/// Stores the entire CTC output as an array of character metadata objects.

View file

@ -1,19 +1,19 @@
using System;
namespace MozillaVoiceSttClient.Models
namespace DeepSpeechClient.Models
{
/// <summary>
/// Wrapper of the pointer used for the decoding stream.
/// </summary>
public class MozillaVoiceSttStream : IDisposable
public class DeepSpeechStream : IDisposable
{
private unsafe IntPtr** _streamingStatePp;
/// <summary>
/// Initializes a new instance of <see cref="MozillaVoiceSttStream"/>.
/// Initializes a new instance of <see cref="DeepSpeechStream"/>.
/// </summary>
/// <param name="streamingStatePP">Native pointer of the native stream.</param>
public unsafe MozillaVoiceSttStream(IntPtr** streamingStatePP)
public unsafe DeepSpeechStream(IntPtr** streamingStatePP)
{
_streamingStatePp = streamingStatePP;
}

View file

@ -1,4 +1,4 @@
namespace MozillaVoiceSttClient.Models
namespace DeepSpeechClient.Models
{
/// <summary>
/// Stores the entire CTC output as an array of character metadata objects.

View file

@ -1,4 +1,4 @@
namespace MozillaVoiceSttClient.Models
namespace DeepSpeechClient.Models
{
/// <summary>
/// Stores each individual character, along with its timing information.

View file

@ -0,0 +1,102 @@
using DeepSpeechClient.Enums;
using System;
using System.Runtime.InteropServices;
namespace DeepSpeechClient
{
/// <summary>
/// Wrapper for the native implementation of "libdeepspeech.so"
/// </summary>
internal static class NativeImp
{
#region Native Implementation
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl,
CharSet = CharSet.Ansi, SetLastError = true)]
internal static extern IntPtr DS_Version();
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern ErrorCodes DS_CreateModel(string aModelPath,
ref IntPtr** pint);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern IntPtr DS_ErrorCodeToErrorMessage(int aErrorCode);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern uint DS_GetModelBeamWidth(IntPtr** aCtx);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern ErrorCodes DS_SetModelBeamWidth(IntPtr** aCtx,
uint aBeamWidth);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern ErrorCodes DS_CreateModel(string aModelPath,
uint aBeamWidth,
ref IntPtr** pint);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern int DS_GetModelSampleRate(IntPtr** aCtx);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern ErrorCodes DS_EnableExternalScorer(IntPtr** aCtx,
string aScorerPath);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern ErrorCodes DS_DisableExternalScorer(IntPtr** aCtx);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern ErrorCodes DS_SetScorerAlphaBeta(IntPtr** aCtx,
float aAlpha,
float aBeta);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl,
CharSet = CharSet.Ansi, SetLastError = true)]
internal static unsafe extern IntPtr DS_SpeechToText(IntPtr** aCtx,
short[] aBuffer,
uint aBufferSize);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl, SetLastError = true)]
internal static unsafe extern IntPtr DS_SpeechToTextWithMetadata(IntPtr** aCtx,
short[] aBuffer,
uint aBufferSize,
uint aNumResults);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern void DS_FreeModel(IntPtr** aCtx);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern ErrorCodes DS_CreateStream(IntPtr** aCtx,
ref IntPtr** retval);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern void DS_FreeStream(IntPtr** aSctx);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern void DS_FreeMetadata(IntPtr metadata);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern void DS_FreeString(IntPtr str);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl,
CharSet = CharSet.Ansi, SetLastError = true)]
internal static unsafe extern void DS_FeedAudioContent(IntPtr** aSctx,
short[] aBuffer,
uint aBufferSize);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern IntPtr DS_IntermediateDecode(IntPtr** aSctx);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern IntPtr DS_IntermediateDecodeWithMetadata(IntPtr** aSctx,
uint aNumResults);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl,
CharSet = CharSet.Ansi, SetLastError = true)]
internal static unsafe extern IntPtr DS_FinishStream(IntPtr** aSctx);
[DllImport("libdeepspeech.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern IntPtr DS_FinishStreamWithMetadata(IntPtr** aSctx,
uint aNumResults);
#endregion
}
}

View file

@ -1,7 +1,7 @@
using System;
using System.Runtime.InteropServices;
namespace MozillaVoiceSttClient.Structs
namespace DeepSpeechClient.Structs
{
[StructLayout(LayoutKind.Sequential)]
internal unsafe struct CandidateTranscript

View file

@ -1,7 +1,7 @@
using System;
using System.Runtime.InteropServices;
namespace MozillaVoiceSttClient.Structs
namespace DeepSpeechClient.Structs
{
[StructLayout(LayoutKind.Sequential)]
internal unsafe struct Metadata

View file

@ -1,7 +1,7 @@
using System;
using System.Runtime.InteropServices;
namespace MozillaVoiceSttClient.Structs
namespace DeepSpeechClient.Structs
{
[StructLayout(LayoutKind.Sequential)]
internal unsafe struct TokenMetadata

View file

@ -6,8 +6,8 @@
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
<ProjectGuid>{312965E5-C4F6-4D95-BA64-79906B8BC7AC}</ProjectGuid>
<OutputType>Exe</OutputType>
<RootNamespace>MozillaVoiceSttConsole</RootNamespace>
<AssemblyName>MozillaVoiceSttConsole</AssemblyName>
<RootNamespace>DeepSpeechConsole</RootNamespace>
<AssemblyName>DeepSpeechConsole</AssemblyName>
<TargetFrameworkVersion>v4.6.2</TargetFrameworkVersion>
<FileAlignment>512</FileAlignment>
<AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects>
@ -56,9 +56,9 @@
<None Include="packages.config" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\MozillaVoiceSttClient\MozillaVoiceSttClient.csproj">
<ProjectReference Include="..\DeepSpeechClient\DeepSpeechClient.csproj">
<Project>{56DE4091-BBBE-47E4-852D-7268B33B971F}</Project>
<Name>MozillaVoiceSttClient</Name>
<Name>DeepSpeechClient</Name>
</ProjectReference>
</ItemGroup>
<ItemGroup>

View file

@ -1,6 +1,6 @@
using MozillaVoiceSttClient;
using MozillaVoiceSttClient.Interfaces;
using MozillaVoiceSttClient.Models;
using DeepSpeechClient;
using DeepSpeechClient.Interfaces;
using DeepSpeechClient.Models;
using NAudio.Wave;
using System;
using System.Collections.Generic;
@ -52,7 +52,7 @@ namespace CSharpExamples
Console.WriteLine("Loading model...");
stopwatch.Start();
// sphinx-doc: csharp_ref_model_start
using (IMozillaVoiceSttModel sttClient = new MozillaVoiceSttModel(model ?? "output_graph.pbmm"))
using (IDeepSpeech sttClient = new DeepSpeech(model ?? "output_graph.pbmm"))
{
// sphinx-doc: csharp_ref_model_stop
stopwatch.Stop();

View file

@ -5,7 +5,7 @@ using System.Runtime.InteropServices;
// General Information about an assembly is controlled through the following
// set of attributes. Change these attribute values to modify the information
// associated with an assembly.
[assembly: AssemblyTitle("MozillaVoiceSttConsole")]
[assembly: AssemblyTitle("DeepSpeechConsole")]
[assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("")]

View file

View file

@ -1,8 +1,8 @@
<Application
x:Class="MozillaVoiceSttWPF.App"
x:Class="DeepSpeechWPF.App"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="clr-namespace:MozillaVoiceSttWPF"
xmlns:local="clr-namespace:DeepSpeechWPF"
StartupUri="MainWindow.xaml">
<Application.Resources />
</Application>

View file

@ -1,10 +1,10 @@
using CommonServiceLocator;
using MozillaVoiceStt.WPF.ViewModels;
using MozillaVoiceSttClient.Interfaces;
using DeepSpeech.WPF.ViewModels;
using DeepSpeechClient.Interfaces;
using GalaSoft.MvvmLight.Ioc;
using System.Windows;
namespace MozillaVoiceSttWPF
namespace DeepSpeechWPF
{
/// <summary>
/// Interaction logic for App.xaml
@ -18,11 +18,11 @@ namespace MozillaVoiceSttWPF
try
{
//Register instance of Mozilla Voice STT
MozillaVoiceSttClient.MozillaVoiceSttModel client =
new MozillaVoiceSttClient.MozillaVoiceSttModel("deepspeech-0.8.0-models.pbmm");
//Register instance of DeepSpeech
DeepSpeechClient.DeepSpeech deepSpeechClient =
new DeepSpeechClient.DeepSpeech("deepspeech-0.8.0-models.pbmm");
SimpleIoc.Default.Register<IMozillaVoiceSttModel>(() => client);
SimpleIoc.Default.Register<IDeepSpeech>(() => deepSpeechClient);
SimpleIoc.Default.Register<MainWindowViewModel>();
}
catch (System.Exception ex)
@ -35,8 +35,8 @@ namespace MozillaVoiceSttWPF
protected override void OnExit(ExitEventArgs e)
{
base.OnExit(e);
//Dispose instance of Mozilla Voice STT
ServiceLocator.Current.GetInstance<IMozillaVoiceSttModel>()?.Dispose();
//Dispose instance of DeepSpeech
ServiceLocator.Current.GetInstance<IDeepSpeech>()?.Dispose();
}
}
}
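
With the registration above in place, any component can resolve the shared instance through the service locator; SimpleIoc hands back the same object each time, so the model is loaded once per process. A sketch of the consuming side:

using System;
using CommonServiceLocator;
using DeepSpeechClient.Interfaces;

// Resolve the instance registered in OnStartup and confirm the native
// library is reachable by printing its version.
IDeepSpeech stt = ServiceLocator.Current.GetInstance<IDeepSpeech>();
Console.WriteLine(stt.Version());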

View file

@ -6,8 +6,8 @@
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
<ProjectGuid>{54BFD766-4305-4F4C-BA59-AF45505DF3C1}</ProjectGuid>
<OutputType>WinExe</OutputType>
<RootNamespace>MozillaVoiceStt.WPF</RootNamespace>
<AssemblyName>MozillaVoiceStt.WPF</AssemblyName>
<RootNamespace>DeepSpeech.WPF</RootNamespace>
<AssemblyName>DeepSpeech.WPF</AssemblyName>
<TargetFrameworkVersion>v4.6.2</TargetFrameworkVersion>
<FileAlignment>512</FileAlignment>
<ProjectTypeGuids>{60dc8134-eba5-43b8-bcc9-bb4bc16c2548};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
@ -131,9 +131,9 @@
<None Include="App.config" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\MozillaVoiceSttClient\MozillaVoiceSttClient.csproj">
<ProjectReference Include="..\DeepSpeechClient\DeepSpeechClient.csproj">
<Project>{56de4091-bbbe-47e4-852d-7268b33b971f}</Project>
<Name>MozillaVoiceSttClient</Name>
<Name>DeepSpeechClient</Name>
</ProjectReference>
</ItemGroup>
<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

View file

@ -3,9 +3,9 @@ Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 15
VisualStudioVersion = 15.0.28307.421
MinimumVisualStudioVersion = 10.0.40219.1
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MozillaVoiceStt.WPF", "MozillaVoiceStt.WPF.csproj", "{54BFD766-4305-4F4C-BA59-AF45505DF3C1}"
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "DeepSpeech.WPF", "DeepSpeech.WPF.csproj", "{54BFD766-4305-4F4C-BA59-AF45505DF3C1}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MozillaVoiceSttClient", "..\MozillaVoiceSttClient\MozillaVoiceSttClient.csproj", "{56DE4091-BBBE-47E4-852D-7268B33B971F}"
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "DeepSpeechClient", "..\DeepSpeechClient\DeepSpeechClient.csproj", "{56DE4091-BBBE-47E4-852D-7268B33B971F}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution

View file

@ -1,10 +1,10 @@
<Window
x:Class="MozillaVoiceSttWPF.MainWindow"
x:Class="DeepSpeechWPF.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
Title="Mozilla Voice STT Client"
Title="Deepspeech client"
Width="800"
Height="600"
Loaded="Window_Loaded"

View file

@ -1,8 +1,8 @@
using CommonServiceLocator;
using MozillaVoiceStt.WPF.ViewModels;
using DeepSpeech.WPF.ViewModels;
using System.Windows;
namespace MozillaVoiceSttWPF
namespace DeepSpeechWPF
{
/// <summary>
/// Interaction logic for MainWindow.xaml

View file

@ -7,11 +7,11 @@ using System.Windows;
// General Information about an assembly is controlled through the following
// set of attributes. Change these attribute values to modify the information
// associated with an assembly.
[assembly: AssemblyTitle("MozillaVoiceStt.WPF")]
[assembly: AssemblyTitle("DeepSpeech.WPF")]
[assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("")]
[assembly: AssemblyProduct("MozillaVoiceStt.WPF.SingleFiles")]
[assembly: AssemblyProduct("DeepSpeech.WPF.SingleFiles")]
[assembly: AssemblyCopyright("Copyright © 2018")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]

View file

@ -8,7 +8,7 @@
// </auto-generated>
//------------------------------------------------------------------------------
namespace MozillaVoiceStt.WPF.Properties {
namespace DeepSpeech.WPF.Properties {
using System;
@ -39,7 +39,7 @@ namespace MozillaVoiceStt.WPF.Properties {
internal static global::System.Resources.ResourceManager ResourceManager {
get {
if (object.ReferenceEquals(resourceMan, null)) {
global::System.Resources.ResourceManager temp = new global::System.Resources.ResourceManager("MozillaVoiceStt.WPF.Properties.Resources", typeof(Resources).Assembly);
global::System.Resources.ResourceManager temp = new global::System.Resources.ResourceManager("DeepSpeech.WPF.Properties.Resources", typeof(Resources).Assembly);
resourceMan = temp;
}
return resourceMan;

View file

@ -8,7 +8,7 @@
// </auto-generated>
//------------------------------------------------------------------------------
namespace MozillaVoiceStt.WPF.Properties {
namespace DeepSpeech.WPF.Properties {
[global::System.Runtime.CompilerServices.CompilerGeneratedAttribute()]

View file

@ -3,7 +3,7 @@ using System.Collections.Generic;
using System.ComponentModel;
using System.Runtime.CompilerServices;
namespace MozillaVoiceStt.WPF.ViewModels
namespace DeepSpeech.WPF.ViewModels
{
/// <summary>
/// Implementation of <see cref="INotifyPropertyChanged"/> to simplify models.

View file

@ -3,8 +3,8 @@ using CSCore;
using CSCore.CoreAudioAPI;
using CSCore.SoundIn;
using CSCore.Streams;
using MozillaVoiceSttClient.Interfaces;
using MozillaVoiceSttClient.Models;
using DeepSpeechClient.Interfaces;
using DeepSpeechClient.Models;
using GalaSoft.MvvmLight.CommandWpf;
using Microsoft.Win32;
using System;
@ -15,7 +15,7 @@ using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace MozillaVoiceStt.WPF.ViewModels
namespace DeepSpeech.WPF.ViewModels
{
/// <summary>
/// View model of the MainWindow View.
@ -27,7 +27,7 @@ namespace MozillaVoiceStt.WPF.ViewModels
private const string ScorerPath = "kenlm.scorer";
#endregion
private readonly IMozillaVoiceSttModel _sttClient;
private readonly IDeepSpeech _sttClient;
#region Commands
/// <summary>
@ -62,7 +62,7 @@ namespace MozillaVoiceStt.WPF.ViewModels
/// <summary>
/// Stream used to feed data into the acoustic model.
/// </summary>
private MozillaVoiceSttStream _sttStream;
private DeepSpeechStream _sttStream;
/// <summary>
/// Records the audio of the selected device.
@ -75,7 +75,7 @@ namespace MozillaVoiceStt.WPF.ViewModels
private SoundInSource _soundInSource;
/// <summary>
/// Target wave source.(16KHz Mono 16bit for Mozilla Voice STT)
/// Target wave source.(16KHz Mono 16bit for DeepSpeech)
/// </summary>
private IWaveSource _convertedSource;
@ -200,7 +200,7 @@ namespace MozillaVoiceStt.WPF.ViewModels
#endregion
#region Ctors
public MainWindowViewModel(IMozillaVoiceSttModel sttClient)
public MainWindowViewModel(IDeepSpeech sttClient)
{
_sttClient = sttClient;
@ -290,8 +290,7 @@ namespace MozillaVoiceStt.WPF.ViewModels
//read data from the convertedSource
//important: don't use the e.Data here
//the e.Data contains the raw data provided by the
//soundInSource which won't have the Mozilla Voice STT required
// audio format
//soundInSource which won't have the deepspeech required audio format
byte[] buffer = new byte[_convertedSource.WaveFormat.BytesPerSecond / 2];
int read;
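
The hunk ends before the loop body; a sketch of how a buffer sized this way is typically drained and fed to the stream, assuming the fields declared above (_convertedSource, _sttClient, _sttStream):

// Pull converted audio (16KHz mono 16bit) out of the CSCore source
// and hand it to the ongoing DeepSpeech stream.
while ((read = _convertedSource.Read(buffer, 0, buffer.Length)) > 0)
{
    short[] samples = new short[read / 2];
    // Reinterpret the little-endian byte pairs as 16-bit samples.
    System.Buffer.BlockCopy(buffer, 0, samples, 0, read);
    _sttClient.FeedAudioContent(_sttStream, samples, (uint)samples.Length);
}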

View file

@ -1,29 +0,0 @@
namespace MozillaVoiceSttClient.Enums
{
/// <summary>
/// Error codes from the native Mozilla Voice STT binary.
/// </summary>
internal enum ErrorCodes
{
STT_ERR_OK = 0x0000,
STT_ERR_NO_MODEL = 0x1000,
STT_ERR_INVALID_ALPHABET = 0x2000,
STT_ERR_INVALID_SHAPE = 0x2001,
STT_ERR_INVALID_SCORER = 0x2002,
STT_ERR_MODEL_INCOMPATIBLE = 0x2003,
STT_ERR_SCORER_NOT_ENABLED = 0x2004,
STT_ERR_SCORER_UNREADABLE = 0x2005,
STT_ERR_SCORER_INVALID_LM = 0x2006,
STT_ERR_SCORER_NO_TRIE = 0x2007,
STT_ERR_SCORER_INVALID_TRIE = 0x2008,
STT_ERR_SCORER_VERSION_MISMATCH = 0x2009,
STT_ERR_FAIL_INIT_MMAP = 0x3000,
STT_ERR_FAIL_INIT_SESS = 0x3001,
STT_ERR_FAIL_INTERPRETER = 0x3002,
STT_ERR_FAIL_RUN_SESS = 0x3003,
STT_ERR_FAIL_CREATE_STREAM = 0x3004,
STT_ERR_FAIL_READ_PROTOBUF = 0x3005,
STT_ERR_FAIL_CREATE_SESS = 0x3006,
STT_ERR_FAIL_CREATE_MODEL = 0x3007,
}
}

View file

@ -1,102 +0,0 @@
using MozillaVoiceSttClient.Enums;
using System;
using System.Runtime.InteropServices;
namespace MozillaVoiceSttClient
{
/// <summary>
/// Wrapper for the native implementation of "libmozilla_voice_stt.so"
/// </summary>
internal static class NativeImp
{
#region Native Implementation
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl,
CharSet = CharSet.Ansi, SetLastError = true)]
internal static extern IntPtr STT_Version();
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern ErrorCodes STT_CreateModel(string aModelPath,
ref IntPtr** pint);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern IntPtr STT_ErrorCodeToErrorMessage(int aErrorCode);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern uint STT_GetModelBeamWidth(IntPtr** aCtx);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern ErrorCodes STT_SetModelBeamWidth(IntPtr** aCtx,
uint aBeamWidth);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern ErrorCodes STT_CreateModel(string aModelPath,
uint aBeamWidth,
ref IntPtr** pint);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal unsafe static extern int STT_GetModelSampleRate(IntPtr** aCtx);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern ErrorCodes STT_EnableExternalScorer(IntPtr** aCtx,
string aScorerPath);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern ErrorCodes STT_DisableExternalScorer(IntPtr** aCtx);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern ErrorCodes STT_SetScorerAlphaBeta(IntPtr** aCtx,
float aAlpha,
float aBeta);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl,
CharSet = CharSet.Ansi, SetLastError = true)]
internal static unsafe extern IntPtr STT_SpeechToText(IntPtr** aCtx,
short[] aBuffer,
uint aBufferSize);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl, SetLastError = true)]
internal static unsafe extern IntPtr STT_SpeechToTextWithMetadata(IntPtr** aCtx,
short[] aBuffer,
uint aBufferSize,
uint aNumResults);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern void STT_FreeModel(IntPtr** aCtx);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern ErrorCodes STT_CreateStream(IntPtr** aCtx,
ref IntPtr** retval);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern void STT_FreeStream(IntPtr** aSctx);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern void STT_FreeMetadata(IntPtr metadata);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern void STT_FreeString(IntPtr str);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl,
CharSet = CharSet.Ansi, SetLastError = true)]
internal static unsafe extern void STT_FeedAudioContent(IntPtr** aSctx,
short[] aBuffer,
uint aBufferSize);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern IntPtr STT_IntermediateDecode(IntPtr** aSctx);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern IntPtr STT_IntermediateDecodeWithMetadata(IntPtr** aSctx,
uint aNumResults);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl,
CharSet = CharSet.Ansi, SetLastError = true)]
internal static unsafe extern IntPtr STT_FinishStream(IntPtr** aSctx);
[DllImport("libmozilla_voice_stt.so", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern IntPtr STT_FinishStreamWithMetadata(IntPtr** aSctx,
uint aNumResults);
#endregion
}
}

View file

@ -1,8 +1,8 @@
Building Mozilla Voice STT native client for Windows
Building DeepSpeech native client for Windows
=============================================
Now we can build the native client of Mozilla Voice STT and run inference on Windows using the C# client, to do that we need to compile the ``native_client``.
Now we can build the native client of DeepSpeech and run inference on Windows using the C# client, to do that we need to compile the ``native_client``.
**Table of Contents**
@ -59,8 +59,8 @@ There should already be a symbolic link, for this example let's suppose that we
.
├── D:\
│ ├── cloned # Contains Mozilla Voice STT and tensorflow side by side
│ │ └── DeepSpeech # Root of the cloned Mozilla Voice STT
│ ├── cloned # Contains DeepSpeech and tensorflow side by side
│ │ └── DeepSpeech # Root of the cloned DeepSpeech
│ │ ├── tensorflow # Root of the cloned Mozilla's tensorflow
└── ...
@ -126,7 +126,7 @@ We will add AVX/AVX2 support in the command, please make sure that your CPU supp
.. code-block:: bash
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --copt=/arch:AVX --copt=/arch:AVX2 //native_client:libmozilla_voice_stt.so
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --copt=/arch:AVX --copt=/arch:AVX2 //native_client:libdeepspeech.so
GPU with CUDA
~~~~~~~~~~~~~
@ -135,11 +135,11 @@ If you enabled CUDA in `configure.py <https://github.com/mozilla/tensorflow/blob
.. code-block:: bash
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --config=cuda --copt=/arch:AVX --copt=/arch:AVX2 //native_client:libmozilla_voice_stt.so
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --config=cuda --copt=/arch:AVX --copt=/arch:AVX2 //native_client:libdeepspeech.so
Be patient, if you enabled AVX/AVX2 and CUDA it will take a long time. Finally you should see it stops and shows the path to the generated ``libmozilla_voice_stt.so``.
Be patient, if you enabled AVX/AVX2 and CUDA it will take a long time. Finally you should see it stops and shows the path to the generated ``libdeepspeech.so``.
Using the generated library
---------------------------
As for now we can only use the generated ``libmozilla_voice_stt.so`` with the C# clients, go to `native_client/dotnet/ <https://github.com/mozilla/DeepSpeech/tree/master/native_client/dotnet>`_ in your Mozilla Voice STT directory and open the Visual Studio solution, then we need to build in debug or release mode, finally we just need to copy ``libmozilla_voice_stt.so`` to the generated ``x64/Debug`` or ``x64/Release`` directory.
As for now we can only use the generated ``libdeepspeech.so`` with the C# clients, go to `native_client/dotnet/ <https://github.com/mozilla/DeepSpeech/tree/master/native_client/dotnet>`_ in your DeepSpeech directory and open the Visual Studio solution, then we need to build in debug or release mode, finally we just need to copy ``libdeepspeech.so`` to the generated ``x64/Debug`` or ``x64/Release`` directory.
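
Once copied, the C# client binds to ``libdeepspeech.so`` via P/Invoke at runtime. A quick smoke test (a sketch, assuming a model such as ``output_graph.pbmm`` sits next to the executable):

.. code-block:: csharp

   using System;
   using DeepSpeechClient;

   class SmokeTest
   {
       static void Main()
       {
           // Printing the native library version confirms libdeepspeech.so
           // was found and loaded; a missing or mismatched copy fails here
           // with a DllNotFoundException.
           using (var stt = new DeepSpeech("output_graph.pbmm"))
           {
               Console.WriteLine(stt.Version());
           }
       }
   }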

View file

@ -3,13 +3,13 @@
<metadata>
<id>$NUPKG_ID</id>
<version>$NUPKG_VERSION</version>
<title>Mozilla.Voice.STT</title>
<title>DeepSpeech</title>
<authors>Mozilla</authors>
<owners>Mozilla</owners>
<license type="expression">MPL-2.0</license>
<projectUrl>http://github.com/mozilla/DeepSpeech</projectUrl>
<requireLicenseAcceptance>false</requireLicenseAcceptance>
<description>A library for running inference with a Mozilla Voice STT model</description>
<description>A library for running inference with a DeepSpeech model</description>
<copyright>Copyright (c) 2019 Mozilla Corporation</copyright>
<tags>native speech speech_recognition</tags>
</metadata>

View file

@ -11,7 +11,7 @@ using namespace std;
#include "ctcdecode/decoder_utils.h"
#include "ctcdecode/scorer.h"
#include "alphabet.h"
#include "mozilla_voice_stt.h"
#include "deepspeech.h"
namespace po = boost::program_options;
@ -66,9 +66,9 @@ create_package(absl::optional<string> alphabet_path,
scorer.set_utf8_mode(force_utf8.value());
scorer.reset_params(default_alpha, default_beta);
int err = scorer.load_lm(lm_path);
if (err != STT_ERR_SCORER_NO_TRIE) {
if (err != DS_ERR_SCORER_NO_TRIE) {
cerr << "Error loading language model file: "
<< STT_ErrorCodeToErrorMessage(err) << "\n";
<< DS_ErrorCodeToErrorMessage(err) << "\n";
return 1;
}
scorer.fill_dictionary(words);

View file

@ -2,7 +2,7 @@
include ../definitions.mk
ARCHS := $(shell grep 'ABI_FILTERS' libmozillavoicestt/gradle.properties | cut -d'=' -f2 | sed -e 's/;/ /g')
ARCHS := $(shell grep 'ABI_FILTERS' libdeepspeech/gradle.properties | cut -d'=' -f2 | sed -e 's/;/ /g')
GRADLE ?= ./gradlew
all: apk
@ -14,13 +14,13 @@ apk-clean:
$(GRADLE) clean
libs-clean:
rm -fr libmozillavoicestt/libs/*/libmozilla_voice_stt.so
rm -fr libdeepspeech/libs/*/libdeepspeech.so
libmozillavoicestt/libs/%/libmozilla_voice_stt.so:
-mkdir libmozillavoicestt/libs/$*/
cp ${TFDIR}/bazel-out/$*-*/bin/native_client/libmozilla_voice_stt.so libmozillavoicestt/libs/$*/
libdeepspeech/libs/%/libdeepspeech.so:
-mkdir libdeepspeech/libs/$*/
cp ${TFDIR}/bazel-out/$*-*/bin/native_client/libdeepspeech.so libdeepspeech/libs/$*/
apk: apk-clean bindings $(patsubst %,libmozillavoicestt/libs/%/libmozilla_voice_stt.so,$(ARCHS))
apk: apk-clean bindings $(patsubst %,libdeepspeech/libs/%/libdeepspeech.so,$(ARCHS))
$(GRADLE) build
maven-bundle: apk
@ -28,4 +28,4 @@ maven-bundle: apk
$(GRADLE) zipMavenArtifacts
bindings: clean ds-swig
$(DS_SWIG_ENV) swig -c++ -java -package org.mozilla.voice.stt -outdir libmozillavoicestt/src/main/java/org/mozilla/voice/stt/ -o jni/deepspeech_wrap.cpp jni/deepspeech.i
$(DS_SWIG_ENV) swig -c++ -java -package org.mozilla.deepspeech.libdeepspeech -outdir libdeepspeech/src/main/java/org/mozilla/deepspeech/libdeepspeech/ -o jni/deepspeech_wrap.cpp jni/deepspeech.i

View file

@ -4,7 +4,7 @@ android {
compileSdkVersion 27
defaultConfig {
applicationId "org.mozilla.voice.sttapp"
applicationId "org.mozilla.deepspeech"
minSdkVersion 21
targetSdkVersion 27
versionName androidGitVersion.name()
@ -28,7 +28,7 @@ android {
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation project(':libmozillavoicestt')
implementation project(':libdeepspeech')
implementation 'com.android.support:appcompat-v7:27.1.1'
implementation 'com.android.support.constraint:constraint-layout:1.1.3'
testImplementation 'junit:junit:4.12'

View file

@ -1,4 +1,4 @@
package org.mozilla.voice.sttapp;
package org.mozilla.deepspeech;
import android.content.Context;
import android.support.test.InstrumentationRegistry;
@ -21,6 +21,6 @@ public class ExampleInstrumentedTest {
// Context of the app under test.
Context appContext = InstrumentationRegistry.getTargetContext();
assertEquals("org.mozilla.voice.sttapp", appContext.getPackageName());
assertEquals("org.mozilla.deepspeech", appContext.getPackageName());
}
}

View file

@ -1,6 +1,6 @@
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="org.mozilla.voice.sttapp">
package="org.mozilla.deepspeech">
<application
android:allowBackup="true"
@ -9,7 +9,7 @@
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MozillaVoiceSttActivity">
<activity android:name=".DeepSpeechActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />

View file

@ -1,4 +1,4 @@
package org.mozilla.voice.sttapp;
package org.mozilla.deepspeech;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
@ -16,11 +16,11 @@ import java.io.IOException;
import java.nio.ByteOrder;
import java.nio.ByteBuffer;
import org.mozilla.voice.stt.MozillaVoiceSttModel;
import org.mozilla.deepspeech.libdeepspeech.DeepSpeechModel;
public class MozillaVoiceSttActivity extends AppCompatActivity {
public class DeepSpeechActivity extends AppCompatActivity {
MozillaVoiceSttModel _m = null;
DeepSpeechModel _m = null;
EditText _tfliteModel;
EditText _audioFile;
@ -50,7 +50,7 @@ public class MozillaVoiceSttActivity extends AppCompatActivity {
this._tfliteStatus.setText("Creating model");
if (this._m == null) {
// sphinx-doc: java_ref_model_start
this._m = new MozillaVoiceSttModel(tfliteModel);
this._m = new DeepSpeechModel(tfliteModel);
this._m.setBeamWidth(BEAM_WIDTH);
// sphinx-doc: java_ref_model_stop
}

View file

@ -4,7 +4,7 @@
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MozillaVoiceSttActivity">
tools:context=".DeepSpeechActivity">
<!--
<TextView

View file

@ -1,3 +1,3 @@
<resources>
<string name="app_name">Mozilla Voice STT</string>
<string name="app_name">DeepSpeech</string>
</resources>

View file

@ -1,4 +1,4 @@
package org.mozilla.voice.stt;
package org.mozilla.deepspeech;
import org.junit.Test;

View file

@ -2,7 +2,7 @@
%{
#define SWIG_FILE_WITH_INIT
#include "../../mozilla_voice_stt.h"
#include "../../deepspeech.h"
%}
%include "typemaps.i"
@ -10,7 +10,7 @@
%javaconst(1);
%include "arrays_java.i"
// apply to STT_FeedAudioContent and STT_SpeechToText
// apply to DS_FeedAudioContent and DS_SpeechToText
%apply short[] { short* };
%include "cpointer.i"
@ -43,7 +43,7 @@
}
~Metadata() {
STT_FreeMetadata(self);
DS_FreeMetadata(self);
}
}
@ -54,13 +54,13 @@
%nodefaultctor TokenMetadata;
%nodefaultdtor TokenMetadata;
%typemap(newfree) char* "STT_FreeString($1);";
%newobject STT_SpeechToText;
%newobject STT_IntermediateDecode;
%newobject STT_FinishStream;
%newobject STT_ErrorCodeToErrorMessage;
%typemap(newfree) char* "DS_FreeString($1);";
%newobject DS_SpeechToText;
%newobject DS_IntermediateDecode;
%newobject DS_FinishStream;
%newobject DS_ErrorCodeToErrorMessage;
%rename ("%(strip:[STT_])s") "";
%rename ("%(strip:[DS_])s") "";
// make struct members camel case to suit Java conventions
%rename ("%(camelcase)s", %$ismember) "";
@ -71,4 +71,4 @@
%ignore "Metadata::transcripts";
%ignore "CandidateTranscript::tokens";
%include "../mozilla_voice_stt.h"
%include "../deepspeech.h"

View file

View file

@ -26,12 +26,12 @@ add_library( deepspeech-lib
set_target_properties( deepspeech-lib
PROPERTIES
IMPORTED_LOCATION
${CMAKE_SOURCE_DIR}/libs/${ANDROID_ABI}/libmozilla_voice_stt.so )
${CMAKE_SOURCE_DIR}/libs/${ANDROID_ABI}/libdeepspeech.so )
add_custom_command( TARGET deepspeech-jni POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy
${CMAKE_SOURCE_DIR}/libs/${ANDROID_ABI}/libmozilla_voice_stt.so
${CMAKE_LIBRARY_OUTPUT_DIRECTORY}/libmozilla_voice_stt.so )
${CMAKE_SOURCE_DIR}/libs/${ANDROID_ABI}/libdeepspeech.so
${CMAKE_LIBRARY_OUTPUT_DIRECTORY}/libdeepspeech.so )
# Searches for a specified prebuilt library and stores the path as a

View file

@ -44,9 +44,9 @@ android {
installOptions "-d","-t"
}
// Avoid scanning stt_doc
// Avoid scanning libdeepspeech_doc
sourceSets {
main.java.srcDirs = [ 'src/main/java/org/mozilla/voice/stt/' ]
main.java.srcDirs = [ 'src/main/java/org/mozilla/deepspeech/libdeepspeech/' ]
}
}
@ -76,9 +76,9 @@ uploadArchives {
repositories {
mavenDeployer {
pom.packaging = 'aar'
pom.name = 'stt'
pom.groupId = 'org.mozilla.voice'
pom.artifactId = 'stt'
pom.name = 'libdeepspeech'
pom.groupId = 'org.mozilla.deepspeech'
pom.artifactId = 'libdeepspeech'
pom.version = dsVersionString + (project.hasProperty('snapshot') ? '-SNAPSHOT' : '')
pom.project {
@ -95,8 +95,8 @@ uploadArchives {
developers {
developer {
id 'mozillavoicestt'
name 'Mozilla Voice STT Team'
id 'deepspeech'
name 'Mozilla DeepSpeech Team'
email 'deepspeechs@lists.mozilla.org'
}
}

View file

View file

@ -1,4 +1,4 @@
package org.mozilla.voice.stt.test;
package org.mozilla.deepspeech.libdeepspeech.test;
import android.content.Context;
import android.support.test.InstrumentationRegistry;
@ -11,8 +11,8 @@ import org.junit.runners.MethodSorters;
import static org.junit.Assert.*;
import org.mozilla.voice.stt.MozillaVoiceSttModel;
import org.mozilla.voice.stt.CandidateTranscript;
import org.mozilla.deepspeech.libdeepspeech.DeepSpeechModel;
import org.mozilla.deepspeech.libdeepspeech.CandidateTranscript;
import java.io.RandomAccessFile;
import java.io.FileNotFoundException;
@ -52,12 +52,12 @@ public class BasicTest {
// Context of the app under test.
Context appContext = InstrumentationRegistry.getTargetContext();
assertEquals("org.mozilla.voice.stt.test", appContext.getPackageName());
assertEquals("org.mozilla.deepspeech.libdeepspeech.test", appContext.getPackageName());
}
@Test
public void loadDeepSpeech_basic() {
MozillaVoiceSttModel m = new MozillaVoiceSttModel(modelFile);
DeepSpeechModel m = new DeepSpeechModel(modelFile);
m.freeModel();
}
@ -69,7 +69,7 @@ public class BasicTest {
return retval;
}
private String doSTT(MozillaVoiceSttModel m, boolean extendedMetadata) {
private String doSTT(DeepSpeechModel m, boolean extendedMetadata) {
try {
RandomAccessFile wave = new RandomAccessFile(wavFile, "r");
@ -114,7 +114,7 @@ public class BasicTest {
@Test
public void loadDeepSpeech_stt_noLM() {
MozillaVoiceSttModel m = new MozillaVoiceSttModel(modelFile);
DeepSpeechModel m = new DeepSpeechModel(modelFile);
String decoded = doSTT(m, false);
assertEquals("she had your dark suit in greasy wash water all year", decoded);
@ -123,7 +123,7 @@ public class BasicTest {
@Test
public void loadDeepSpeech_stt_withLM() {
MozillaVoiceSttModel m = new MozillaVoiceSttModel(modelFile);
DeepSpeechModel m = new DeepSpeechModel(modelFile);
m.enableExternalScorer(scorerFile);
String decoded = doSTT(m, false);
@ -133,7 +133,7 @@ public class BasicTest {
@Test
public void loadDeepSpeech_sttWithMetadata_noLM() {
MozillaVoiceSttModel m = new MozillaVoiceSttModel(modelFile);
DeepSpeechModel m = new DeepSpeechModel(modelFile);
String decoded = doSTT(m, true);
assertEquals("she had your dark suit in greasy wash water all year", decoded);
@ -142,7 +142,7 @@ public class BasicTest {
@Test
public void loadDeepSpeech_sttWithMetadata_withLM() {
MozillaVoiceSttModel m = new MozillaVoiceSttModel(modelFile);
DeepSpeechModel m = new DeepSpeechModel(modelFile);
m.enableExternalScorer(scorerFile);
String decoded = doSTT(m, true);

View file

@ -1,2 +1,2 @@
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="org.mozilla.voice.stt" />
package="org.mozilla.deepspeech.libdeepspeech" />

View file

@ -1,13 +1,13 @@
package org.mozilla.voice.stt;
package org.mozilla.deepspeech.libdeepspeech;
/**
* @brief Exposes a Mozilla Voice STT model in Java
* @brief Exposes a DeepSpeech model in Java
**/
public class MozillaVoiceSttModel {
public class DeepSpeechModel {
static {
System.loadLibrary("deepspeech-jni");
System.loadLibrary("mozilla_voice_stt");
System.loadLibrary("deepspeech");
}
// FIXME: We should have something better than those SWIGTYPE_*
@ -15,14 +15,14 @@ public class MozillaVoiceSttModel {
private SWIGTYPE_p_ModelState _msp;
private void evaluateErrorCode(int errorCode) {
Error_Codes code = Error_Codes.swigToEnum(errorCode);
if (code != Error_Codes.ERR_OK) {
DeepSpeech_Error_Codes code = DeepSpeech_Error_Codes.swigToEnum(errorCode);
if (code != DeepSpeech_Error_Codes.ERR_OK) {
throw new RuntimeException("Error: " + impl.ErrorCodeToErrorMessage(errorCode) + " (0x" + Integer.toHexString(errorCode) + ").");
}
}
/**
* @brief An object providing an interface to a trained Mozilla Voice STT model.
* @brief An object providing an interface to a trained DeepSpeech model.
*
* @constructor
*
@ -30,7 +30,7 @@ public class MozillaVoiceSttModel {
*
* @throws RuntimeException on failure.
*/
public MozillaVoiceSttModel(String modelPath) {
public DeepSpeechModel(String modelPath) {
this._mspp = impl.new_modelstatep();
evaluateErrorCode(impl.CreateModel(modelPath, this._mspp));
this._msp = impl.modelstatep_value(this._mspp);
@ -107,7 +107,7 @@ public class MozillaVoiceSttModel {
}
/*
* @brief Use the Mozilla Voice STT model to perform Speech-To-Text.
* @brief Use the DeepSpeech model to perform Speech-To-Text.
*
* @param buffer A 16-bit, mono raw audio signal at the appropriate
* sample rate (matching what the model was trained on).
@ -120,7 +120,7 @@ public class MozillaVoiceSttModel {
}
/**
* @brief Use the Mozilla Voice STT model to perform Speech-To-Text and output metadata
* @brief Use the DeepSpeech model to perform Speech-To-Text and output metadata
* about the results.
*
* @param buffer A 16-bit, mono raw audio signal at the appropriate
@ -144,10 +144,10 @@ public class MozillaVoiceSttModel {
*
* @throws RuntimeException on failure.
*/
public MozillaVoiceSttStreamingState createStream() {
public DeepSpeechStreamingState createStream() {
SWIGTYPE_p_p_StreamingState ssp = impl.new_streamingstatep();
evaluateErrorCode(impl.CreateStream(this._msp, ssp));
return new MozillaVoiceSttStreamingState(impl.streamingstatep_value(ssp));
return new DeepSpeechStreamingState(impl.streamingstatep_value(ssp));
}
/**
@ -158,7 +158,7 @@ public class MozillaVoiceSttModel {
* appropriate sample rate (matching what the model was trained on).
* @param buffer_size The number of samples in @p buffer.
*/
public void feedAudioContent(MozillaVoiceSttStreamingState ctx, short[] buffer, int buffer_size) {
public void feedAudioContent(DeepSpeechStreamingState ctx, short[] buffer, int buffer_size) {
impl.FeedAudioContent(ctx.get(), buffer, buffer_size);
}
@ -169,7 +169,7 @@ public class MozillaVoiceSttModel {
*
* @return The STT intermediate result.
*/
public String intermediateDecode(MozillaVoiceSttStreamingState ctx) {
public String intermediateDecode(DeepSpeechStreamingState ctx) {
return impl.IntermediateDecode(ctx.get());
}
@ -181,7 +181,7 @@ public class MozillaVoiceSttModel {
*
* @return The STT intermediate result.
*/
public Metadata intermediateDecodeWithMetadata(MozillaVoiceSttStreamingState ctx, int num_results) {
public Metadata intermediateDecodeWithMetadata(DeepSpeechStreamingState ctx, int num_results) {
return impl.IntermediateDecodeWithMetadata(ctx.get(), num_results);
}
@ -195,7 +195,7 @@ public class MozillaVoiceSttModel {
*
* @note This method will free the state pointer (@p ctx).
*/
public String finishStream(MozillaVoiceSttStreamingState ctx) {
public String finishStream(DeepSpeechStreamingState ctx) {
return impl.FinishStream(ctx.get());
}
@ -212,7 +212,7 @@ public class MozillaVoiceSttModel {
*
* @note This method will free the state pointer (@p ctx).
*/
public Metadata finishStreamWithMetadata(MozillaVoiceSttStreamingState ctx, int num_results) {
public Metadata finishStreamWithMetadata(DeepSpeechStreamingState ctx, int num_results) {
return impl.FinishStreamWithMetadata(ctx.get(), num_results);
}
}

View file

@ -0,0 +1,13 @@
package org.mozilla.deepspeech.libdeepspeech;
public final class DeepSpeechStreamingState {
private SWIGTYPE_p_StreamingState _sp;
public DeepSpeechStreamingState(SWIGTYPE_p_StreamingState sp) {
this._sp = sp;
}
public SWIGTYPE_p_StreamingState get() {
return this._sp;
}
}

View file

@ -6,7 +6,7 @@
* the SWIG interface file instead.
* ----------------------------------------------------------------------------- */
package org.mozilla.voice.stt;
package org.mozilla.deepspeech.libdeepspeech;
/**
* A single transcript computed by the model, including a confidence<br>

Some files were not shown because too many files changed in this diff.