Implementation of the Denoising Diffusion Probabilistic Model (DDPM) in PyTorch. Diffusion models are a newer approach to generative modeling that may have the potential to rival GANs: they use denoising score matching to estimate the gradient of the data distribution, followed by Langevin sampling to draw samples from it. If you simply want to pass in a folder of images and the desired image dimensions, the Trainer class lets you train a model with a few lines of code, as in the sketch below.
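A minimal sketch of that workflow, assuming the package exposes Unet, GaussianDiffusion, and Trainer classes with the constructor arguments used in the upstream lucidrains denoising-diffusion-pytorch implementation; the class names and hyperparameter values below are illustrative assumptions, not prescribed by this page:

```python
# Sketch: train a DDPM on a folder of images using the Trainer class.
# Class names and argument names mirror the upstream lucidrains package
# and are assumptions here; adjust to the installed version.
from denoising_diffusion_pytorch import Unet, GaussianDiffusion, Trainer

model = Unet(
    dim = 64,
    dim_mults = (1, 2, 4, 8)      # channel multipliers per U-Net stage
)

diffusion = GaussianDiffusion(
    model,
    image_size = 128,             # desired (square) image dimension
    timesteps = 1000              # number of diffusion steps
)

trainer = Trainer(
    diffusion,
    'path/to/your/images',        # folder of training images
    train_batch_size = 32,
    train_lr = 8e-5,
    train_num_steps = 700000,     # total training steps
    gradient_accumulate_every = 2,
    ema_decay = 0.995,            # exponential moving average of weights
    amp = True                    # mixed precision via Accelerate
)

trainer.train()                   # samples/checkpoints are written to ./results
```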
Features
- Annotated code by Research Scientists
- This implementation was transcribed from the official TensorFlow version
- Samples and model checkpoints will be logged to ./results periodically
- The Trainer class is now equipped with Accelerator
- You can easily do multi-GPU training in two steps (see the note after this list)
- A new approach to generative modeling
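The two steps for multi-GPU training are presumably the standard Hugging Face Accelerate workflow: run `accelerate config` once to describe your hardware, then start your training script (for example a hypothetical train.py that builds the Trainer above and calls trainer.train()) with `accelerate launch train.py`.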
Categories
- Machine Learning

License
- MIT License