TransformerTTS is an implementation of a non-autoregressive Transformer-based neural network for text-to-speech, built with TensorFlow 2. It takes inspiration from architectures like FastSpeech, FastSpeech 2, FastPitch, and Transformer TTS, and extends them with its own aligner and forward models. The system separates alignment learning and acoustic modeling: an autoregressive Transformer is used as an aligner to extract phoneme-to-frame durations, while a non-autoregressive “ForwardTransformer” generates mel-spectrograms conditioned on text and durations. This design addresses common autoregressive issues such as repetition, skipped words, and unstable attention, and results in robust, fast synthesis where all frames are predicted in parallel.

The repository ships with tooling to build datasets (especially LJSpeech) and create training data, plus scripts to train both the aligner and the TTS model, monitor training with TensorBoard, and resume or reset training runs.
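The duration-conditioned parallel decoding can be illustrated with a minimal length-regulation sketch. This is a NumPy-only illustration of the idea, not the repository's API; the function name and the toy durations are assumptions made for the example:

```python
import numpy as np

def length_regulate(encodings: np.ndarray, durations: np.ndarray) -> np.ndarray:
    """Expand per-phoneme encoder outputs to frame level by repeating each
    phoneme vector according to its predicted duration (in frames).

    The frame-level sequence can then be decoded into mel frames in parallel,
    with no autoregressive dependency between frames.
    """
    return np.repeat(encodings, durations, axis=0)

# Three phonemes with 4-dim encodings; durations are the kind of values the
# aligner-derived duration model would predict (made up here for illustration).
phoneme_encodings = np.arange(12, dtype=np.float32).reshape(3, 4)
durations = np.array([2, 1, 3])

frames = length_regulate(phoneme_encodings, durations)
print(frames.shape)  # (6, 4): one row per mel frame
```

Because the frame count is fixed up front by the durations, there is no stopping criterion to learn and no attention to drift, which is where the robustness against repeated or skipped words comes from.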
## Features
- Non-autoregressive Transformer TTS model that predicts spectrogram frames in parallel for fast synthesis
- Separate aligner and forward models for robust duration modeling and improved stability
- Pre-trained LJSpeech checkpoints compatible with MelGAN, HiFi-GAN, and Griffin-Lim reconstruction
- TensorFlow 2 implementation with scripts for dataset creation, training, prediction, and monitoring via TensorBoard
- Configurable training pipeline using YAML configs for data paths, model hyperparameters, and vocoder compatibility
- Support for custom datasets through pluggable metadata readers and training-data preparation scripts
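A pluggable metadata reader along the lines of the last bullet could be sketched as follows. The function name and return format are assumptions for illustration; LJSpeech's `metadata.csv` genuinely uses pipe-separated `id|transcript|normalized transcript` rows:

```python
from typing import Dict, Iterable

def read_ljspeech_metadata(lines: Iterable[str]) -> Dict[str, str]:
    """Parse LJSpeech-style metadata rows into {file_id: normalized_text}.

    Each row looks like: LJ001-0001|raw transcript|normalized transcript
    A reader for a custom dataset would only need to return the same mapping.
    """
    metadata = {}
    for line in lines:
        parts = line.rstrip("\n").split("|")
        if len(parts) < 3:
            continue  # skip malformed rows
        file_id, _raw, normalized = parts[0], parts[1], parts[2]
        metadata[file_id] = normalized
    return metadata

sample = ["LJ001-0001|Printing, in the only sense|Printing, in the only sense"]
print(read_ljspeech_metadata(sample))
```

Keeping the reader's output to a plain id-to-text mapping is what makes the training-data preparation scripts dataset-agnostic: any corpus with audio files and transcripts can be adapted by writing one such function.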