# DeepSpeech Playbook
A crash course on training speech recognition models using DeepSpeech.
## Quick links
- DeepSpeech on GitHub
- DeepSpeech documentation on ReadTheDocs
- DeepSpeech discussions on Mozilla's Discourse forum
- Common Voice Datasets
- How to install Docker
## Introduction
Start here. This section will set your expectations for what you can achieve with the DeepSpeech Playbook, and the prerequisites you'll need to start to train your own speech recognition models.
## About DeepSpeech
Once you know what you can achieve with the DeepSpeech Playbook, this section provides an overview of DeepSpeech itself, its component parts, and how it differs from other speech recognition engines you may have used in the past.
## Formatting your training data
Before you can train a model, you will need to collect and format your corpus of data. This section provides an overview of the data format required for DeepSpeech, and walks through an example in prepping a dataset from Common Voice.
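For a rough feel for the target format (DATA_FORMATTING.md is the authoritative reference), the sketch below writes a training CSV with the `wav_filename`, `wav_filesize` and `transcript` columns that DeepSpeech's CSV importer expects. The directory, file names and transcripts are made-up placeholders, and the clips are assumed to be 16 kHz, 16-bit mono WAV files.

```python
import csv
import os

# Placeholder paths; point these at your own corpus.
CLIPS_DIR = "data/clips"
OUTPUT_CSV = "data/train.csv"

# Each entry pairs a 16 kHz, 16-bit mono WAV clip with its transcript.
samples = [
    ("sample_0001.wav", "the quick brown fox"),
    ("sample_0002.wav", "jumped over the lazy dog"),
]

with open(OUTPUT_CSV, "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    # DeepSpeech's CSV importer expects these three columns.
    writer.writerow(["wav_filename", "wav_filesize", "transcript"])
    for wav_name, transcript in samples:
        wav_path = os.path.join(CLIPS_DIR, wav_name)
        writer.writerow([wav_path, os.path.getsize(wav_path), transcript])
```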
## The `alphabet.txt` file

If you are training a model that uses a different alphabet from English, for example a language with diacritical marks, then you will need to modify the `alphabet.txt` file.
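One rough way to draft an `alphabet.txt` for a new language is to collect every character that appears in your transcripts and write one character per line. The sketch below does this from a training CSV; the paths are placeholders, and you should review the output by hand (for example, to strip stray punctuation) before training.

```python
import csv

TRAIN_CSV = "data/train.csv"        # placeholder path
ALPHABET_TXT = "data/alphabet.txt"  # placeholder path

chars = set()
with open(TRAIN_CSV, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        chars.update(row["transcript"].lower())

with open(ALPHABET_TXT, "w", encoding="utf-8") as f:
    # One character per line; the space character gets a line of its own.
    for ch in sorted(chars):
        f.write(ch + "\n")
```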
## Building your own scorer
Learn what the scorer does, and how you can go about building your own.
## Acoustic model and language model

Learn about the differences between DeepSpeech's acoustic model and language model, and how they combine to provide end-to-end speech recognition.
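As a simplified illustration of how the two models interact during decoding: the decoder scores each candidate transcript by combining the acoustic model's log-probability with a weighted language model log-probability and a word-insertion bonus. The `alpha` and `beta` weights below correspond to the scorer hyperparameters DeepSpeech exposes, but the function itself is a toy, not the real beam-search decoder.

```python
import math

def combined_score(acoustic_log_prob, lm_log_prob, word_count,
                   alpha=0.93, beta=1.18):
    """Toy scoring rule: acoustic score plus weighted language model score
    plus a per-word bonus. The alpha and beta values are just examples."""
    return acoustic_log_prob + alpha * lm_log_prob + beta * word_count

# A candidate the language model prefers can outrank one the acoustic
# model alone would have chosen:
print(combined_score(math.log(0.30), math.log(0.20), word_count=4))
print(combined_score(math.log(0.35), math.log(0.02), word_count=4))
```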
## Setting up your training environment

This section walks you through building a Docker image and spawning DeepSpeech in a Docker container with persistent storage. This approach avoids the complexities of dependencies such as `tensorflow`.
## Training a model
Once you have your training data formatted, and your training environment established, this section will show you how to train a model, and provide guidance for overcoming common pitfalls.
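As a very rough sketch of what launching a training run can look like, the snippet below shells out to the training script with a handful of common flags. The flag names are assumptions based on the DeepSpeech 0.9.x training script, and the paths are placeholders; TRAINING.md is the authoritative reference for your version.

```python
# Sketch only: flag names are assumed from DeepSpeech 0.9.x and all paths
# are placeholders. See TRAINING.md for the options in your version.
import subprocess

cmd = [
    "python3", "DeepSpeech.py",
    "--train_files", "data/train.csv",
    "--dev_files", "data/dev.csv",
    "--test_files", "data/test.csv",
    "--alphabet_config_path", "data/alphabet.txt",
    "--epochs", "30",
    "--checkpoint_dir", "checkpoints/",
    "--export_dir", "exported-model/",
]
subprocess.run(cmd, check=True)
```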
## Testing a model
Once you've trained a model, you will need to validate that it works for the context it's been designed for. This section walks you through this process.
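Word Error Rate (WER) and Character Error Rate (CER) are the usual metrics here; DeepSpeech reports them for you when you supply test files, but as an illustration of what WER measures, here is a small word-level edit-distance implementation (not DeepSpeech's own evaluation code).

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER: word-level edit distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[len(ref)][len(hyp)] / max(len(ref), 1)

# One substituted word out of four gives a WER of 0.25.
print(word_error_rate("the quick brown fox", "the quick brwn fox"))
```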
## Deploying your model

Once your model is trained and tested, it is ready to be deployed. This section provides an overview of how you can deploy your model.
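For instance, transcribing a WAV file with the `deepspeech` Python package looks roughly like the sketch below. The model, scorer and audio file names are placeholders; see DEPLOYMENT.md for the other deployment options.

```python
# Rough sketch of inference with the `deepspeech` Python package
# (pip install deepspeech). Model, scorer and audio paths are placeholders.
import wave

import numpy as np
from deepspeech import Model

ds = Model("deepspeech-model.pbmm")
ds.enableExternalScorer("deepspeech.scorer")  # optional external language model

with wave.open("utterance-16kHz-mono.wav", "rb") as wav:
    # The released models expect 16-bit, 16 kHz, mono PCM audio.
    frames = wav.readframes(wav.getnframes())
audio = np.frombuffer(frames, dtype=np.int16)

print(ds.stt(audio))
```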
## Applying DeepSpeech to real world problems

This section covers specific use cases where DeepSpeech can be applied to real world problems, such as transcription, keyword searching and voice-controlled applications.
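As a toy example of the keyword-searching use case, one simple approach is to run speech-to-text (as in the deployment sketch above) and then scan the resulting transcript for the terms you care about; the keyword set and transcript below are placeholders.

```python
# Toy keyword spotting over a transcript (for example, the output of stt()).
KEYWORDS = {"lights", "music", "temperature"}

def spot_keywords(transcript: str, keywords=KEYWORDS):
    """Return the keywords that appear in the transcript, sorted."""
    return sorted(set(transcript.lower().split()) & keywords)

print(spot_keywords("turn the lights off and play some music"))
# prints: ['lights', 'music']
```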
## Setting up Continuous Integration
Learn how to set up Continuous Integration (CI) for your own fork of DeepSpeech. Intended for developers who are utilising DeepSpeech for their own specific use cases.
## Introductory courses on machine learning

Providing an introduction to machine learning is beyond the scope of this Playbook; however, having an understanding of machine learning and deep learning concepts will aid your efforts in training speech recognition models with DeepSpeech.
Here, we've linked to several resources that you may find helpful; they're listed in the order we recommend reading them in.
- Digital Ocean's introductory machine learning tutorial provides an overview of different types of machine learning. The diagrams in this tutorial are a great way of explaining key concepts.
- Google's machine learning crash course provides a gentle introduction to the main concepts of machine learning, including gradient descent, learning rate, training, test and validation sets, and overfitting.
- If machine learning is something that sparks your interest, then you may enjoy the MIT Open Learning Library's Introduction to Machine Learning course, a 13-week college-level course covering perceptrons, neural networks, support vector machines and convolutional neural networks.
## How you can help provide feedback on the DeepSpeech Playbook

You can help to make the DeepSpeech Playbook even better by providing feedback via a GitHub Issue:
- Please try these instructions, particularly for building a Docker image and running a Docker container, on multiple distributions of Linux so that we can identify corner cases.
- Please contribute your tacit knowledge, such as:
  - common errors encountered in data formatting, environment setup, training and validation
  - techniques or approaches for improving the scorer or alphabet file, or for reducing Word Error Rate (WER) and Character Error Rate (CER)
  - case studies of the work you or your organisation have been doing, showing your approaches to data validation, training or evaluation
- Please identify errors in text - with many eyes, bugs are shallow :-)