The CycleGAN and pix2pix in PyTorch repository is a PyTorch implementation of two influential image-to-image translation frameworks: CycleGAN for unpaired translation and pix2pix for paired translation. It gives developers and researchers a convenient platform to train and test both methods, handling paired datasets (input-to-output pairs) and unpaired datasets (domain-to-domain collections) with minimal changes to the workflow. The code provides standard training and inference pipelines and, as of recent updates, runs on current Python and PyTorch releases (e.g. Python 3.11, PyTorch 2.4) and supports distributed/multi-GPU training for scalable workflows. Because of this flexibility, it can be applied to many tasks: style transfer between domains (season change, art-to-photo, etc.), mapping sketches or edges to real images, image colorization, day-to-night conversion, photo enhancement, and more.
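As a quick illustration, the commands below sketch what a typical run looks like, assuming the repository's standard `train.py` and `test.py` entry points and a dataset placed under `./datasets/`; exact option names can vary between versions, so check `python train.py --help` for the authoritative list.

```bash
# Unpaired translation (CycleGAN): expects datasets/maps/trainA and trainB folders.
python train.py --dataroot ./datasets/maps --name maps_cyclegan --model cycle_gan

# Paired translation (pix2pix): expects aligned A|B images under datasets/facades/train.
python train.py --dataroot ./datasets/facades --name facades_pix2pix --model pix2pix --direction BtoA

# Test a trained model; results are written under ./results/<experiment name>/.
python test.py --dataroot ./datasets/maps --name maps_cyclegan --model cycle_gan
```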
Features
- Supports both paired (pix2pix) and unpaired (CycleGAN) image-to-image translation
- PyTorch with CUDA, single-GPU, and multi-GPU (DDP) support for scalable training and inference (see the multi-GPU example after this list)
- Templates and dataset-structure guidance for custom datasets (trainA/B, testA/B, etc.; see the directory sketch after this list)
- Configurable training: choice of model, hyperparameters, translation direction, batch size, learning-rate scheduling, and more (see the flag examples after this list)
- Works for a wide variety of tasks: style transfer, colorization, domain transfer, maps ↔ photos, edges ↔ images, etc.
- Pretrained models available for common translations (e.g. horse2zebra, edges2shoes) for quick testing (download example after this list)
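For custom data, the unpaired (CycleGAN) loader expects one image folder per domain and split, while the paired (pix2pix) loader expects side-by-side A|B images in a single folder per split. A rough layout, using `datasets/mydata` as a placeholder name, might look like:

```
datasets/mydata/          # unpaired (CycleGAN) layout
├── trainA/               # training images from domain A
├── trainB/               # training images from domain B
├── testA/
└── testB/
```

For paired training, the repo ships a helper that concatenates separate A and B folders into aligned A|B images (flag names as in the current script, but worth verifying against your checkout):

```bash
python datasets/combine_A_and_B.py --fold_A /path/to/A --fold_B /path/to/B --fold_AB ./datasets/mydata
```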
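Training options are passed as command-line flags. The sketch below shows a few commonly adjusted ones, including multi-GPU selection via `--gpu_ids`; option names here assume a recent version of the repo, and `python train.py --help` remains the source of truth.

```bash
# Typical CycleGAN training run on two GPUs with a larger batch size.
# --gpu_ids selects devices (use -1 for CPU); --n_epochs / --n_epochs_decay set the
# constant-LR phase and the linear-decay phase; --lr is the initial Adam learning rate.
python train.py --dataroot ./datasets/maps --name maps_cyclegan --model cycle_gan \
    --gpu_ids 0,1 --batch_size 4 --n_epochs 100 --n_epochs_decay 100 --lr 0.0002
```

For paired (pix2pix) experiments, `--direction AtoB` or `--direction BtoA` controls which side of the aligned image is treated as the input.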
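Pretrained weights can be fetched with the download scripts in the repo and then evaluated with `test.py`. The horse2zebra example below follows that pattern (script paths assume the repo's `scripts/` and `datasets/` helpers as currently shipped):

```bash
# Fetch the pretrained horse2zebra generator and the matching dataset,
# then run inference; results land under ./results/horse2zebra_pretrained/.
bash ./scripts/download_cyclegan_model.sh horse2zebra
bash ./datasets/download_cyclegan_dataset.sh horse2zebra
python test.py --dataroot datasets/horse2zebra/testA --name horse2zebra_pretrained --model test --no_dropout
```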