Implementation of Imagen, Google's Text-to-Image Neural Network that beats DALL-E2, in Pytorch. It is the new SOTA for text-to-image synthesis.

Architecturally, it is actually much simpler than DALL-E2. It consists of a cascading DDPM conditioned on text embeddings from a large pre-trained T5 model (attention network). It also contains dynamic clipping for improved classifier-free guidance, noise level conditioning, and a memory-efficient unet design. It appears neither CLIP nor a prior network is needed after all. And so research continues.

For simpler training, you can directly supply text strings instead of precomputing text encodings, as in the sketch below. (Although for scaling purposes, you will definitely want to precompute the textual embeddings and mask.)
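As a rough sketch of both conditioning paths (names follow the repository's documented API; the hyperparameters are illustrative, and a real setup would cascade several unets rather than the single base unet shown here):

```python
import torch
from imagen_pytorch import Unet, Imagen

# a single base unet; the repository's examples cascade two or more
unet = Unet(
    dim = 32,
    cond_dim = 512,
    dim_mults = (1, 2, 4, 8),
    layer_attns = (False, True, True, True),
    layer_cross_attns = (False, True, True, True)
)

imagen = Imagen(
    unets = (unet,),
    image_sizes = (64,),    # output resolution of each unet in the cascade
    timesteps = 1000,
    cond_drop_prob = 0.1    # conditioning dropout, for classifier-free guidance
)

images = torch.randn(3, 3, 64, 64)  # stand-in for a real image batch

# simpler path: raw text strings, one caption per image in the batch
loss = imagen(images, texts = [
    'a whale breaching from afar',
    'young girl blowing out candles on her birthday cake',
    'fireworks with blue and green sparkles'
], unet_number = 1)
loss.backward()

# scalable path: precompute the T5 embeddings (and mask) offline, then pass them in
text_embeds = torch.randn(3, 256, 768)   # (batch, sequence length, T5 embed dim)
text_masks = torch.ones(3, 256).bool()
loss = imagen(images, text_embeds = text_embeds, text_masks = text_masks, unet_number = 1)
loss.backward()
```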
Features
- The number of textual captions must match the batch size of the images
- With the ImagenTrainer wrapper class, the exponential moving averages for all of the U-nets in the cascading DDPM are automatically taken care of when calling update (see the trainer sketch after this list)
- You can also train Imagen without text (unconditional image generation); a sketch follows below
- You can also rely on the ImagenTrainer to automatically train off DataLoader instances (also sketched below)
- Generate an image from any text prompt with a single command-line invocation
- Imagen uses an algorithm called classifier-free guidance (summarized after this list)
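For the ImagenTrainer point above, a minimal sketch reusing the imagen and tensors from the first example (update() is the documented call that steps the optimizer and refreshes the EMA copy of the chosen unet):

```python
from imagen_pytorch import ImagenTrainer

trainer = ImagenTrainer(imagen)  # wraps the Imagen instance built earlier

# the trainer handles the backward pass; max_batch_size splits the
# batch into chunks for gradient accumulation
loss = trainer(images, text_embeds = text_embeds, unet_number = 1, max_batch_size = 4)
trainer.update(unet_number = 1)  # optimizer step plus EMA update for unet 1

# sampling goes through the EMA unets
samples = trainer.sample(texts = ['a puppy sitting in a teacup'], cond_scale = 3.)
```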
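For unconditional training, the relevant switch is the condition_on_text flag on Imagen; a hedged sketch:

```python
import torch
from imagen_pytorch import Unet, Imagen, ImagenTrainer

unet = Unet(dim = 32, dim_mults = (1, 2, 4, 8))

imagen = Imagen(
    condition_on_text = False,  # no text conditioning anywhere in the cascade
    unets = (unet,),
    image_sizes = (64,),
    timesteps = 1000
)

trainer = ImagenTrainer(imagen)

images = torch.randn(8, 3, 64, 64)  # stand-in for real images
loss = trainer(images, unet_number = 1)
trainer.update(unet_number = 1)

samples = trainer.sample(batch_size = 16)  # no prompt required
```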
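For the DataLoader point, the trainer exposes an add_train_dataloader hook and a train_step helper; the mock dataset below is only a stand-in for real images:

```python
import torch
from torch.utils.data import DataLoader
from imagen_pytorch import ImagenTrainer

class MockImages(torch.utils.data.Dataset):
    # yields bare image tensors, which is what the unconditional trainer expects
    def __len__(self):
        return 64

    def __getitem__(self, idx):
        return torch.randn(3, 64, 64)

trainer = ImagenTrainer(imagen)  # the unconditional imagen from the previous sketch
trainer.add_train_dataloader(DataLoader(MockImages(), batch_size = 8))

for _ in range(100):
    loss = trainer.train_step(unet_number = 1)  # fetches a batch, trains, updates
```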
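On classifier-free guidance: during training the text conditioning is randomly dropped (the cond_drop_prob argument in the first sketch) so the network also learns an unconditional prediction; at sampling time the two predictions are extrapolated, which is what cond_scale controls. A schematic of that guidance step, not the repository's exact code:

```python
import torch

def guided_prediction(pred_cond: torch.Tensor, pred_uncond: torch.Tensor, cond_scale: float) -> torch.Tensor:
    # cond_scale = 1 recovers the plain conditional model; larger values
    # push samples harder toward the text condition at some cost in diversity
    return pred_uncond + cond_scale * (pred_cond - pred_uncond)
```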
License
MIT License