πŸ€– πŸ’¬ Deep learning for Text to Speech (Discussion forum: https://discourse.mozilla.org/c/tts)

TTS (Work in Progress...)

Here we have a PyTorch implementation of Tacotron.

The goal is to make it easy to add new models and try different architectures.

You can also find here a brief note about possible TTS architectures and how they compare.

Requirements

It is highly recommended to use miniconda for easier installation.

  • python 3.6
  • pytorch > 0.2.0
  • TODO

Data

Currently TTS provides a data loader for the LJSpeech dataset.
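For illustration, a minimal loader for an LJSpeech-style metadata.csv might look like the sketch below. The function name is hypothetical (not the repository's actual loader); it only relies on the dataset's documented layout: pipe-delimited lines of file id, raw transcript, and normalized transcript.

```python
import os
import tempfile

def load_ljspeech_metadata(root):
    """Read LJSpeech's metadata.csv: pipe-delimited lines of
    file id | raw transcript | normalized transcript."""
    items = []
    with open(os.path.join(root, "metadata.csv"), encoding="utf-8") as f:
        for line in f:
            file_id, _, normalized = line.rstrip("\n").split("|")
            wav_path = os.path.join(root, "wavs", file_id + ".wav")
            # Pair each audio path with the normalized transcript.
            items.append((wav_path, normalized))
    return items

# Demo against a tiny fake dataset laid out like LJSpeech.
root = tempfile.mkdtemp()
with open(os.path.join(root, "metadata.csv"), "w", encoding="utf-8") as f:
    f.write("LJ001-0001|Printing, in the only sense|printing, in the only sense\n")

items = load_ljspeech_metadata(root)
print(items[0][1])  # → printing, in the only sense
```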

Training the network

To run your own training, define a config.json file (a simple template is given below) and call the following command:

python train.py --config_path config.json

If you would like to use specific GPUs:

CUDA_VISIBLE_DEVICES="0,1,4" python train.py --config_path config.json

Each run creates an experiment folder, named with the corresponding date and time, under the folder you set in config.json. If no checkpoint has been saved under that folder yet, it is removed when you press Ctrl+C.
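The dated experiment folder described above could be created as in the sketch below. The helper name and exact timestamp format are assumptions for illustration, not the repository's actual code.

```python
import os
import tempfile
from datetime import datetime

def create_experiment_folder(output_path):
    # Encode the run's date and time in the folder name,
    # e.g. "2018-01-26_12-05-24" (hypothetical naming scheme).
    run_name = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    exp_path = os.path.join(output_path, run_name)
    os.makedirs(exp_path, exist_ok=True)
    return exp_path

# Demo in a throwaway temporary directory.
exp_path = create_experiment_folder(tempfile.mkdtemp())
print(os.path.isdir(exp_path))  # → True
```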

Example config.json:

{
  // Data loading parameters
  "num_mels": 80,
  "num_freq": 1024,
  "sample_rate": 20000,
  "frame_length_ms": 50.0,
  "frame_shift_ms": 12.5,
  "preemphasis": 0.97,
  "min_level_db": -100,
  "ref_level_db": 20,
  "hidden_size": 128,
  "embedding_size": 256,
  "text_cleaner": "english_cleaners",

  // Training parameters
  "epochs": 2000,
  "lr": 0.001,
  "lr_patience": 2,  // lr_scheduler.ReduceLROnPlateau().patience
  "lr_decay": 0.5,   // lr_scheduler.ReduceLROnPlateau().factor
  "batch_size": 256,
  "griffin_lim_iters": 60,
  "power": 1.5,
  "r": 5,            // number of decoder outputs for Tacotron

  // Number of data loader processes
  "num_loader_workers": 8,

  // Experiment logging parameters
  "save_step": 200,
  "data_path": "/path/to/KeithIto/LJSpeech-1.0",
  "output_path": "/path/to/my_experiment",
  "log_dir": "/path/to/my/tensorboard/logs/"
}
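Note that the template above uses //-style comments, which the standard json module rejects. A minimal loader that strips such comments before parsing might look like this (a sketch, assuming "//" never appears inside a string value; the function name is hypothetical):

```python
import json
import re
import tempfile

def load_config(path):
    """Parse a config.json that may contain //-style comments
    (plain json.load would raise on them)."""
    with open(path) as f:
        text = f.read()
    # Naive comment stripping: drop everything from "//" to end of line.
    text = re.sub(r"//[^\n]*", "", text)
    return json.loads(text)

# Demo with a minimal commented config fragment.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write('{\n  "num_mels": 80,  // mel bands\n  "r": 5\n}\n')
    path = f.name

cfg = load_config(path)
print(cfg["num_mels"], cfg["r"])  # → 80 5
```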