A Text-to-Speech Transformer in TensorFlow 2
Implementation of a non-autoregressive Transformer-based neural network for Text-to-Speech (TTS).
This repo is based on the following papers:
Spectrograms produced with LJSpeech and standard data configuration from this repo are compatible with WaveRNN.
Being non-autoregressive, this Transformer model predicts the entire mel spectrogram in parallel rather than frame by frame.
These samples' spectrograms are converted using the pre-trained WaveRNN vocoder.
Try it out on Colab:
| Version | Colab Link |
|---|---|
| Forward | |
| Autoregressive | |
Make sure you have:
Install espeak as the phonemizer backend (on macOS use brew):

```bash
sudo apt-get install espeak
```
Then install the rest with pip:

```bash
pip install -r requirements.txt
```
Read the individual scripts for more command line arguments.
You can directly use LJSpeech to create the training dataset.
In `config/standard`, edit the paths in `data_config.yaml` to point at your dataset and log folders.

Prepare a dataset in the following format:
```
|- dataset_folder/
|  |- metadata.csv
|  |- wav/
|     |- file1.wav
|     |- ...
```
where `metadata.csv` has the following format:

```
wav_file_name|transcription
```
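Before running the dataset creation script, it can be worth checking that every entry in `metadata.csv` actually points at a wav file. A minimal sketch of such a check; `check_metadata` is a hypothetical helper, not part of this repo:

```python
import csv
from pathlib import Path

def check_metadata(dataset_folder):
    """Yield (wav_path, transcription) pairs from metadata.csv,
    printing a warning for entries whose audio file is missing."""
    root = Path(dataset_folder)
    with open(root / "metadata.csv", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="|"):
            wav_path = root / "wav" / (row[0] + ".wav")
            if not wav_path.exists():
                print("missing audio:", wav_path)
            yield wav_path, row[1]
```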
```bash
python create_dataset.py --config config/standard
```
Train the autoregressive model:

```bash
python train_autoregressive.py --config config/standard
```
First, use the autoregressive model to create the durations dataset:

```bash
python extract_durations.py --config config/standard --binary --fix_jumps --fill_mode_next
```
This will add an additional folder to the dataset folder containing the new datasets for training and validation of the forward model. If the rhythm of the trained model is off, play around with this script's flags to fix the durations.
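Conceptually, duration extraction turns the autoregressive model's attention alignment into an integer number of mel frames per phoneme. A toy sketch of that idea (not the repo's actual implementation, which additionally binarizes the attention and fixes jumps):

```python
def durations_from_alignment(alignment):
    """alignment: one row per mel frame, each row holding attention
    weights over the phonemes. Returns, for each phoneme, the number
    of mel frames whose attention peak falls on it."""
    n_phonemes = len(alignment[0])
    durations = [0] * n_phonemes
    for frame in alignment:
        # index of the phoneme this mel frame attends to most
        peak = max(range(n_phonemes), key=lambda i: frame[i])
        durations[peak] += 1
    return durations

# three mel frames: two attend mostly to phoneme 0, one to phoneme 1
durations_from_alignment([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7]])
```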
Then train the forward model:

```bash
python train_forward.py --config /path/to/config_folder/
```
Training and model settings can be configured in `model_config.yaml`. To resume a training, use the same configuration files and the same `--session_name` flag, if any. To restart a training from scratch, delete the weights and/or the logs from the logs folder with `--reset_dir` (both), `--reset_logs`, or `--reset_weights`.
We log some information that can be visualized with TensorBoard:
```bash
tensorboard --logdir /logs/directory/
```
Predict with either the Forward or Autoregressive model:

```python
from utils.config_manager import ConfigManager
from utils.audio import reconstruct_waveform

config_loader = ConfigManager('/path/to/config/', model_kind='forward')
model = config_loader.load_model()
out = model.predict('Please, say something.')

# Convert spectrogram to wav (with griffin lim)
wav = reconstruct_waveform(out['mel'].numpy().T, config=config_loader.config)
```
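The reconstructed waveform is an array of floats; to listen to it you still need to write it to disk. A minimal sketch using only the standard library; `save_wav` is a hypothetical helper, not part of this repo, and the 22050 Hz rate is LJSpeech's default, which may differ in your config:

```python
import struct
import wave

def save_wav(samples, path, sample_rate=22050):
    """Write an iterable of floats in [-1, 1] as a 16-bit mono WAV file."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)          # mono
        f.setsampwidth(2)          # 16-bit PCM
        f.setframerate(sample_rate)
        # clip each sample to [-1, 1] and scale to signed 16-bit
        clipped = (max(-1.0, min(1.0, s)) for s in samples)
        f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in clipped))

save_wav([0.0, 0.5, -0.5], "demo.wav")
```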
| Model URL | Commit |
|---|---|
| ljspeech_forward_model | 4945e775b |
| ljspeech_autoregressive_model_v2 | 4945e775b |
| ljspeech_autoregressive_model_v1 | 2f3a1b5 |
WaveRNN: we took the data processing from their repo and use their vocoder to produce the samples.
Thanks to Erogol and the Mozilla TTS team for the lively exchange on the topic.
See LICENSE for details.