MoCo is an open-source PyTorch implementation from Facebook AI Research (FAIR) of the papers “Momentum Contrast for Unsupervised Visual Representation Learning” (He et al., 2019) and “Improved Baselines with Momentum Contrastive Learning” (Chen et al., 2020). It introduces Momentum Contrast (MoCo), a scalable approach to self-supervised learning that learns visual representations without labeled data. The core idea is to maintain a dynamic dictionary as a queue of encoded samples, paired with a momentum-updated key encoder; this decouples the dictionary size from the mini-batch size, enabling contrastive learning against a large and consistent set of negatives without requiring large batches. The repository implements both MoCo v1 and MoCo v2, the latter improving results with an MLP projection head, stronger data augmentation, and a cosine learning-rate schedule. Training is optimized for distributed multi-GPU environments, using DistributedDataParallel for speed and simplicity.
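The momentum update described above can be sketched in a few lines. This is a dependency-free illustration of the exponential-moving-average rule from the paper, not the repo's actual code (which updates `torch` tensors in-place); the function name and plain-list parameters are hypothetical.

```python
def momentum_update(key_params, query_params, m=0.999):
    """Update each key-encoder parameter as an exponential moving average
    of the corresponding query-encoder parameter: k <- m*k + (1-m)*q.
    m = 0.999 is the default momentum coefficient from the MoCo paper."""
    return [m * k + (1.0 - m) * q for k, q in zip(key_params, query_params)]

# Toy example: key parameters drift slowly toward the query parameters,
# keeping the dictionary keys consistent across training steps.
key = [0.0, 0.0]
query = [1.0, 1.0]
key = momentum_update(key, query)  # each entry moves only 0.1% of the way
```

Because `m` is close to 1, the key encoder evolves smoothly, which is what keeps the queued dictionary entries comparable with each other.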
Features
- PyTorch implementation of MoCo v1 and v2 for unsupervised learning
- Momentum encoder and queue-based dictionary for scalable contrastive representation learning
- Supports distributed multi-GPU training via DistributedDataParallel
- Pre-trained ResNet-50 models available for evaluation and transfer learning
- Includes linear evaluation and object detection transfer examples
- Minimal modifications to the official PyTorch ImageNet training code for easy integration
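The contrastive objective behind these features is InfoNCE: the positive key must score higher than every negative in the queue. In the repo this is computed with PyTorch's cross-entropy over `[positive, queue]` logits; the dependency-free sketch below (function name hypothetical) shows the same math on raw similarity scores.

```python
import math

def info_nce_loss(pos_sim, neg_sims, temperature=0.07):
    """InfoNCE as a (K+1)-way softmax cross-entropy where the positive
    similarity is class 0. temperature=0.07 matches the MoCo default."""
    scaled = [s / temperature for s in [pos_sim] + list(neg_sims)]
    # numerically stable softmax: subtract the max before exponentiating
    mx = max(scaled)
    exps = [math.exp(s - mx) for s in scaled]
    return -math.log(exps[0] / sum(exps))

# The loss shrinks as the positive pair becomes more similar than the negatives.
easy = info_nce_loss(0.9, [0.1, 0.1, 0.1])
hard = info_nce_loss(0.1, [0.1, 0.1, 0.1])
```

Minimizing this loss pulls query and positive-key embeddings together while pushing the query away from the queued negatives.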