This is an implementation of the GANformer model, a novel and efficient type of transformer, explored for the task of image generation. The network employs a bipartite structure that enables long-range interactions across the image while maintaining linear computational efficiency, and can readily scale to high-resolution synthesis. The model iteratively propagates information from a set of latent variables to the evolving visual features and vice versa, to support the refinement of each in light of the other and to encourage the emergence of compositional representations of objects and scenes. In contrast to the classic transformer architecture, it utilizes multiplicative integration that allows flexible region-based modulation, and can thus be seen as a generalization of the successful StyleGAN network. Pre-trained models are provided; they were trained for 5-7x fewer steps than the corresponding StyleGAN2 models, and training them for longer will further improve image quality.
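To make the bipartite design concrete, below is a minimal PyTorch sketch of one latents-to-features attention step with multiplicative (gain-and-bias) modulation. The `BipartiteAttention` class and all names in it are illustrative assumptions for exposition, not the repository's actual implementation.

```python
import torch
import torch.nn as nn

class BipartiteAttention(nn.Module):
    """Illustrative sketch (not the repo's code): a small set of k latent
    variables modulates the H*W image features through cross-attention,
    so the cost is O(k * H*W) instead of the O((H*W)^2) of full
    self-attention over the image."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Multiplicative integration: the attention output is mapped to a
        # per-position gain and bias, giving region-based modulation in
        # the spirit of StyleGAN's (global) style modulation.
        self.to_gain = nn.Linear(dim, dim)
        self.to_bias = nn.Linear(dim, dim)

    def forward(self, features, latents):
        # features: [batch, H*W, dim]; latents: [batch, k, dim], k << H*W
        update, _ = self.attn(query=features, key=latents, value=latents)
        return features * (1 + self.to_gain(update)) + self.to_bias(update)

# Example: 8 latent components modulating a 16x16 feature grid.
feats = torch.randn(2, 16 * 16, 64)
lats = torch.randn(2, 8, 64)
out = BipartiteAttention(dim=64)(feats, lats)  # -> [2, 256, 64]
```

A symmetric features-to-latents step (swapping queries and keys/values) would realize the "vice versa" direction described above, letting the latents be refined in light of the evolving image.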
Features
- Image sampling and visualization script
- Code clean-up and refactoring, with added documentation
- Training and data-preparation instructions
- Pretrained networks for all datasets
- Extra visualizations and evaluations
- Models trained for longer
- Support for both PyTorch and TensorFlow