Project DeepSpeech
==================


.. image:: https://readthedocs.org/projects/deepspeech/badge/?version=latest
   :target: http://deepspeech.readthedocs.io/?badge=latest
   :alt: Documentation


.. image:: https://github.taskcluster.net/v1/repository/mozilla/DeepSpeech/master/badge.svg
   :target: https://github.taskcluster.net/v1/repository/mozilla/DeepSpeech/master/latest
   :alt: Task Status


DeepSpeech is an open source embedded (offline, on-device) Speech-To-Text engine that can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers. It uses a model trained by machine learning techniques, based on `Baidu's Deep Speech research paper <https://arxiv.org/abs/1412.5567>`_. Project DeepSpeech uses Google's `TensorFlow <https://www.tensorflow.org/>`_ to make the implementation easier.

To install and use DeepSpeech, all you have to do is:

.. code-block:: bash

   # Create and activate a virtualenv
   virtualenv -p python3 $HOME/tmp/deepspeech-venv/
   source $HOME/tmp/deepspeech-venv/bin/activate

   # Install DeepSpeech
   pip3 install deepspeech

   # Download pre-trained English model and extract
   curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.5.1/deepspeech-0.5.1-models.tar.gz
   tar xvf deepspeech-0.5.1-models.tar.gz

   # Download example audio files
   curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.5.1/audio-0.5.1.tar.gz
   tar xvf audio-0.5.1.tar.gz

   # Transcribe an audio file
   deepspeech --model deepspeech-0.5.1-models/output_graph.pbmm --alphabet deepspeech-0.5.1-models/alphabet.txt --lm deepspeech-0.5.1-models/lm.binary --trie deepspeech-0.5.1-models/trie --audio audio/2830-3980-0043.wav

A pre-trained English model is available for use and can be downloaded following `these instructions <USING.rst#using-a-pre-trained-model>`_. Currently, only 16-bit, 16 kHz, mono-channel WAVE audio files are supported in the Python client. A package with some example audio files is available for download in our `release notes <https://github.com/mozilla/DeepSpeech/releases/latest>`_.
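
For programmatic use, the same transcription can be done through the Python package. Below is a minimal sketch against the 0.5.1 Python API used in the commands above (the ``Model`` constructor and decoder arguments have changed in later releases, so check the API documentation for the version you installed). The constants are the documented 0.5.1 client defaults:

.. code-block:: python

   import wave

   import numpy as np
   from deepspeech import Model

   # Documented client defaults for the 0.5.1 release.
   N_FEATURES = 26   # MFCC features per frame
   N_CONTEXT = 9     # context frames on each side of the current frame
   BEAM_WIDTH = 500  # decoder beam width
   LM_ALPHA = 0.75   # language model weight
   LM_BETA = 1.85    # word insertion bonus

   ds = Model('deepspeech-0.5.1-models/output_graph.pbmm', N_FEATURES, N_CONTEXT,
              'deepspeech-0.5.1-models/alphabet.txt', BEAM_WIDTH)
   ds.enableDecoderWithLM('deepspeech-0.5.1-models/alphabet.txt',
                          'deepspeech-0.5.1-models/lm.binary',
                          'deepspeech-0.5.1-models/trie',
                          LM_ALPHA, LM_BETA)

   with wave.open('audio/2830-3980-0043.wav', 'rb') as wav:
       # The engine expects 16-bit, 16 kHz, mono PCM; convert anything else
       # (e.g. with sox) before calling stt().
       audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)
       print(ds.stt(audio, wav.getframerate()))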

Quicker inference can be performed using a supported NVIDIA GPU on Linux. See the `release notes <https://github.com/mozilla/DeepSpeech/releases/latest>`_ to find which GPUs are supported. To run ``deepspeech`` on a GPU, install the GPU-specific package:

.. code-block:: bash

   # Create and activate a virtualenv
   virtualenv -p python3 $HOME/tmp/deepspeech-gpu-venv/
   source $HOME/tmp/deepspeech-gpu-venv/bin/activate

   # Install DeepSpeech CUDA enabled package
   pip3 install deepspeech-gpu

   # Transcribe an audio file (using the models and audio downloaded above)
   deepspeech --model deepspeech-0.5.1-models/output_graph.pbmm --alphabet deepspeech-0.5.1-models/alphabet.txt --lm deepspeech-0.5.1-models/lm.binary --trie deepspeech-0.5.1-models/trie --audio audio/2830-3980-0043.wav

Please ensure you have the required `CUDA dependencies <USING.rst#cuda-dependency>`_.
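
A quick way to check that the CUDA libraries are visible to the dynamic loader is the Python sketch below. This assumes a Linux system with ``ldconfig``; it only verifies that the libraries are registered with the loader, not that their versions match the ones ``deepspeech-gpu`` was built against (those are listed in the CUDA dependency section linked above):

.. code-block:: python

   import ctypes.util

   # On Linux, find_library() consults the ldconfig cache and returns the
   # soname (e.g. 'libcudart.so.10.0') if the library is registered, else None.
   for name in ('cuda', 'cudart', 'cudnn'):
       soname = ctypes.util.find_library(name)
       print('lib{}: {}'.format(name, soname or 'not found'))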

See the output of ``deepspeech -h`` for more information on the use of ``deepspeech``. (If you experience problems running ``deepspeech``, please check the `required runtime dependencies <native_client/README.rst#required-dependencies>`_.)

----

**Table of Contents**


* `Using a Pre-trained Model <USING.rst#using-a-pre-trained-model>`_

  * `CUDA dependency <USING.rst#cuda-dependency>`_
  * `Getting the pre-trained model <USING.rst#getting-the-pre-trained-model>`_
  * `Model compatibility <USING.rst#model-compatibility>`_
  * `Using the Python package <USING.rst#using-the-python-package>`_
  * `Using the Node.js package <USING.rst#using-the-nodejs-package>`_
  * `Using the Command Line client <USING.rst#using-the-command-line-client>`_
  * `Installing bindings from source <USING.rst#installing-bindings-from-source>`_
  * `Third party bindings <USING.rst#third-party-bindings>`_

* `Training your own Model <TRAINING.rst#training-your-own-model>`_

  * `Prerequisites for training a model <TRAINING.rst#prerequisites-for-training-a-model>`_
  * `Getting the training code <TRAINING.rst#getting-the-training-code>`_
  * `Installing Python dependencies <TRAINING.rst#installing-python-dependencies>`_
  * `Recommendations <TRAINING.rst#recommendations>`_
  * `Common Voice training data <TRAINING.rst#common-voice-training-data>`_
  * `Training a model <TRAINING.rst#training-a-model>`_
  * `Checkpointing <TRAINING.rst#checkpointing>`_
  * `Exporting a model for inference <TRAINING.rst#exporting-a-model-for-inference>`_
  * `Exporting a model for TFLite <TRAINING.rst#exporting-a-model-for-tflite>`_
  * `Making a mmap-able model for inference <TRAINING.rst#making-a-mmap-able-model-for-inference>`_
  * `Continuing training from a release model <TRAINING.rst#continuing-training-from-a-release-model>`_
  * `Training with Augmentation <TRAINING.rst#training-with-augmentation>`_

* `Contribution guidelines <CONTRIBUTING.rst>`_
* `Contact/Getting Help <SUPPORT.rst>`_