Mirror of https://github.com/mozilla/TTS.git

README update

Commit 32d21545ac (parent 26540d507d)

Changed file: README.md (21 changed lines)

@@ -36,8 +36,10 @@ Please use our dedicated channels for questions and discussion. Help is much more

| Type | Links |
| ------------------------------- | --------------------------------------- |
| 👩🏾‍🏫 **Tutorials and Examples** | [TTS/Wiki](https://github.com/mozilla/TTS/wiki/TTS-Notebooks-and-Tutorials) |
| 🤖 **Released Models** | [TTS/Wiki](https://github.com/mozilla/TTS/wiki/Released-Models)|
| 🚀 **Released Models** | [TTS/Wiki](https://github.com/mozilla/TTS/wiki/Released-Models)|
| 💻 **Docker Image** | [Repository by @synesthesiam](https://github.com/synesthesiam/docker-mozillatts)|
| 🖥️ **Demo Server** | [TTS/server](https://github.com/mozilla/TTS/tree/master/TTS/server)|
| 🤖 **Running TTS on Terminal** | [TTS/README.md](https://github.com/mozilla/TTS#example-synthesizing-speech-on-terminal-using-the-released-models)|

## 🥇 TTS Performance

<p align="center"><img src="https://discourse-prod-uploads-81679984178418.s3.dualstack.us-west-2.amazonaws.com/optimized/3X/6/4/6428f980e9ec751c248e591460895f7881aec0c6_2_1035x591.png" width="800" /></p>

@@ -137,6 +139,23 @@ Some of the public datasets that we successfully applied TTS:

- [LibriTTS](https://openslr.org/60/)
- [Spanish](https://drive.google.com/file/d/1Sm_zyBo67XHkiFhcRSQ4YaHPYM0slO_e/view?usp=sharing) - thx! @carlfm01

## Example: Synthesizing Speech on Terminal Using the Released Models

TTS provides a CLI for synthesizing speech with pre-trained models. You can use either your own model or one of the released models under the TTS project.

List the released TTS models:

```./TTS/bin/synthesize.py --list_models```

Run a TTS and a vocoder model from the released model list. (Simply copy and paste the full model names from the list as arguments to the command below.)

```./TTS/bin/synthesize.py --text "Text for TTS" --model_name "<type>/<language>/<dataset>/<model_name>" --vocoder_name "<type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav```
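
For example, a filled-in command might look like the sketch below. The model and vocoder names are hypothetical stand-ins for the `<type>/<language>/<dataset>/<model_name>` pattern and are not taken from this README; substitute the exact names printed by `--list_models`.

```bash
# Hypothetical model names; copy the exact names printed by --list_models instead.
./TTS/bin/synthesize.py \
    --text "Hello, this is a test sentence." \
    --model_name "tts_models/en/ljspeech/tacotron2-DCA" \
    --vocoder_name "vocoder_models/universal/libri-tts/fullband-melgan" \
    --out_path output/path/speech.wav
```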

Run your own TTS model (using the Griffin-Lim vocoder):

```./TTS/bin/synthesize.py --text "Text for TTS" --model_path path/to/model.pth.tar --config_path path/to/config.json --out_path output/path/speech.wav```

Run your own TTS and vocoder models:

```./TTS/bin/synthesize.py --text "Text for TTS" --model_path path/to/model.pth.tar --config_path path/to/config.json --out_path output/path/speech.wav --vocoder_path path/to/vocoder.pth.tar --vocoder_config_path path/to/vocoder_config.json```
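
If you need to synthesize many utterances, a simple (if slow, since the models are reloaded on every call) approach is to loop over a text file and invoke the CLI once per line with the same flags. This is a minimal sketch; `sentences.txt` and the model, config, and vocoder paths are placeholders.

```bash
# Minimal sketch: one synthesize.py call per line of sentences.txt.
# sentences.txt and all model/config paths are placeholders; reuse your own paths from above.
i=0
while IFS= read -r line; do
    ./TTS/bin/synthesize.py \
        --text "$line" \
        --model_path path/to/model.pth.tar \
        --config_path path/to/config.json \
        --vocoder_path path/to/vocoder.pth.tar \
        --vocoder_config_path path/to/vocoder_config.json \
        --out_path "output/path/speech_${i}.wav"
    i=$((i + 1))
done < sentences.txt
```

Each iteration writes a separate wav file, numbered in input order.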

## Example: Training and Fine-tuning LJ-Speech Dataset

Here you can find a [CoLab](https://gist.github.com/erogol/97516ad65b44dbddb8cd694953187c5b) notebook for a hands-on example of training LJSpeech. Alternatively, you can follow the guideline below manually.
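
The manual route starts from the dataset itself. Below is a minimal sketch of fetching LJSpeech, assuming the standard keithito.com download URL (which is not specified in this part of the README).

```bash
# Download and unpack LJSpeech 1.1; the URL is the standard keithito.com
# distribution and is an assumption here, not taken from this README.
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar -xjf LJSpeech-1.1.tar.bz2
```

The archive unpacks into an `LJSpeech-1.1/` directory containing `metadata.csv` and a `wavs/` folder.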