PixelCNN is OpenAI's official implementation of the autoregressive generative model described in the paper *Conditional Image Generation with PixelCNN Decoders*. It provides code for training and evaluating PixelCNN models on image datasets, focusing on conditional image modeling in which pixels are generated sequentially, each conditioned on the pixels generated before it. The repository demonstrates how masked convolutions enforce these autoregressive dependencies and make tractable likelihood-based training possible. It also includes scripts for reproducing key experimental results from the paper, such as conditional sampling on datasets like CIFAR-10. The project serves as both a research reference and a practical framework for experimenting with autoregressive generative models. Although archived, PixelCNN has influenced a wide range of later work in generative modeling, including image transformers and diffusion models.
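A minimal sketch of the masking idea, written in PyTorch purely for illustration (the repository itself is not PyTorch-based): the convolution kernel is zeroed out so that each output position can only depend on pixels above it and to its left. The `MaskedConv2d` class and its arguments are hypothetical names for this sketch, not the repository's API.

```python
import torch
import torch.nn as nn


class MaskedConv2d(nn.Conv2d):
    """Convolution whose kernel is masked to respect the raster-scan
    ordering: each output pixel sees only pixels above and to its left.

    Mask type 'A' also hides the centre pixel (typically the first layer);
    mask type 'B' keeps the centre (typically all later layers).
    """

    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")
        kh, kw = self.kernel_size
        mask = torch.ones(kh, kw)
        # Zero out the centre row from the centre (or just after it) rightward...
        mask[kh // 2, kw // 2 + (mask_type == "B"):] = 0
        # ...and every row below the centre row.
        mask[kh // 2 + 1:, :] = 0
        self.register_buffer("mask", mask[None, None])

    def forward(self, x):
        # Apply the mask to the weights at every forward pass.
        return nn.functional.conv2d(
            x, self.weight * self.mask, self.bias,
            self.stride, self.padding, self.dilation, self.groups)


# Example: a first layer with mask 'A' sees only already-generated context.
layer = MaskedConv2d("A", in_channels=3, out_channels=64,
                     kernel_size=7, padding=3)
```

Stacking several such masked layers grows the causal receptive field while preserving the autoregressive ordering.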
Features
- Official reference implementation of the PixelCNN model
- Supports conditional image generation with autoregressive decoding (see the sampling sketch after this list)
- Uses masked convolutions to maintain causal dependencies
- Training and evaluation scripts for reproducibility
- Example experiments on standard image datasets like CIFAR-10
- Provides a foundation for studying likelihood-based generative models
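To make the autoregressive decoding concrete, below is a hedged sketch of raster-scan sampling, again in PyTorch for illustration only. The `sample` helper, the `model` object, and its categorical output shape are assumptions made for this sketch; the actual repository parameterizes pixel distributions differently and conditions on additional inputs such as class labels.

```python
import torch


@torch.no_grad()
def sample(model, shape=(1, 3, 32, 32), num_levels=256, device="cpu"):
    """Raster-scan sampling: each pixel is drawn from the model's predicted
    distribution given all previously sampled pixels.

    Assumes `model(x)` returns logits of shape (B, num_levels, C, H, W),
    i.e. a categorical distribution over pixel intensities.
    """
    b, c, h, w = shape
    x = torch.zeros(shape, device=device)
    for i in range(h):
        for j in range(w):
            for k in range(c):
                logits = model(x)[:, :, k, i, j]          # (B, num_levels)
                probs = torch.softmax(logits, dim=-1)
                pixel = torch.multinomial(probs, 1).squeeze(-1)
                # Write the sampled intensity back so later pixels see it.
                x[:, k, i, j] = pixel.float() / (num_levels - 1)
    return x
```

Because every pixel requires a full forward pass over the partially generated image, sampling is sequential and therefore much slower than training, where all conditional likelihoods are evaluated in parallel.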