A Text-to-Speech Transformer in TensorFlow 2

Implementation of a non-autoregressive Transformer-based neural network for Text-to-Speech (TTS).
This repo is based on the following papers:

  • Neural Speech Synthesis with Transformer Network
  • FastSpeech: Fast, Robust and Controllable Text to Speech

Spectrograms produced with LJSpeech and the standard data configuration from this repo are compatible with WaveRNN.

Non-Autoregressive

Being non-autoregressive, this Transformer model is:

  • Robust: no repeats and no failed attention modes on challenging sentences.
  • Fast: with no autoregression, predictions take a fraction of the time.
  • Controllable: the speed of the generated utterance can be controlled.

🔈 Samples

Can be found here.

The spectrograms for these samples are converted to audio with the pre-trained WaveRNN vocoder.

Try it out on Colab:

Version          | Colab Link
Forward          | Open In Colab
Autoregressive   | Open In Colab

📖 Contents

  • Installation
  • Dataset
  • Training
  • Prediction
  • Model Weights
  • Maintainers

Installation

Make sure you have:

  • Python >= 3.6

Install espeak as the phonemizer backend (on macOS, use brew):

sudo apt-get install espeak

Then install the rest with pip:

pip install -r requirements.txt
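
As a quick sanity check (not part of this README) that the environment matches the TensorFlow 2 requirement:

import tensorflow as tf

# This repo targets TensorFlow 2, so this should print a 2.x version
print(tf.__version__)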

Read the individual scripts for more command line arguments.

Dataset

You can directly use LJSpeech to create the training dataset.

Configuration

  • If training on LJSpeech, or if unsure, simply use config/standard
  • EDIT PATHS: in data_config.yaml, edit the paths to point at your dataset and log folders

Custom dataset

Prepare a dataset in the following format:

|- dataset_folder/
|   |- metadata.csv
|   |- wav/
|       |- file1.wav
|       |- ...

where metadata.csv has the following format: wav_file_name|transcription
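
As a minimal sketch of reading this format (the file name and pipe separator come from the layout above; the parsing code itself is illustrative, not part of this repo):

with open('dataset_folder/metadata.csv', encoding='utf-8') as f:
    for line in f:
        # Split on the first pipe only, in case the transcription contains '|'
        wav_file_name, transcription = line.strip().split('|', 1)
        print(wav_file_name, transcription)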

Training

Train Autoregressive Model

Create training dataset

python create_dataset.py --config config/standard

Training

python train_autoregressive.py --config config/standard

Train Forward Model

Compute alignment dataset

First use the autoregressive model to create the durations dataset

python extract_durations.py --config config/standard --binary --fix_jumps --fill_mode_next

This will add a folder to the dataset folder containing the new duration datasets for training and validation of the forward model.
If the rhythm of the trained model is off, experiment with this script's flags to fix the durations.
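
As a toy illustration (not code from this repo) of what these extracted durations represent: one integer per input symbol, summing to the number of mel frames, which lets the forward model expand the phoneme encodings in time (FastSpeech-style length regulation):

import numpy as np

phonemes = ['h', 'e', 'l', 'o']      # hypothetical input symbols
durations = np.array([3, 5, 4, 8])   # frames per symbol, illustrative values

# Repeat each symbol index by its duration; the result has one entry per mel frame
expanded = np.repeat(np.arange(len(phonemes)), durations)
assert len(expanded) == durations.sum()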

Training

python train_forward.py --config /path/to/config_folder/

Training & Model configuration

  • Training and model settings can be configured in model_config.yaml

Resume or restart training

  • To resume training, simply use the same configuration files (and the same --session_name flag, if any)
  • To restart training, delete the weights and/or the logs from the log folder using the training flags --reset_dir (deletes both), --reset_logs, or --reset_weights

Monitor training

We log some information that can be visualized with TensorBoard:

tensorboard --logdir /logs/directory/

(TensorBoard demo screenshot)

Prediction

Predict with either the Forward or Autoregressive model:

from utils.config_manager import ConfigManager
from utils.audio import reconstruct_waveform

config_loader = ConfigManager('/path/to/config/', model_kind='forward')
model = config_loader.load_model()
out = model.predict('Please, say something.')

# Convert spectrogram to wav (with Griffin-Lim)
wav = reconstruct_waveform(out['mel'].numpy().T, config=config_loader.config)
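
To listen to the result, the waveform can be written to disk. This sketch assumes the soundfile package and a 22050 Hz sampling rate, neither of which is specified here; the actual rate comes from the data configuration:

import soundfile as sf

# 22050 Hz is an assumption; use the sampling rate from your data_config.yaml
sf.write('sample.wav', wav, 22050)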

Model Weights

Model URL                         | Commit
ljspeech_forward_model            | 4945e775b
ljspeech_autoregressive_model_v2  | 4945e775b
ljspeech_autoregressive_model_v1  | 2f3a1b5

Maintainers

Special thanks

WaveRNN: we took the data processing from here and use their vocoder to produce the samples.
Erogol and the Mozilla TTS team for the lively exchange on the topic.

See LICENSE for details.
