V-JEPA 2
PyTorch code and models for V-JEPA 2 self-supervised learning from video
...Instead of reconstructing pixels, it predicts the missing high-level embeddings of masked space-time regions using a context encoder and a slowly updated (exponential-moving-average) target encoder. This objective encourages the model to learn semantics, motion, and long-range structure without the shortcuts that pixel-level losses can invite. The architecture is designed to scale: spatiotemporal ViT backbones, flexible masking schedules, and efficient clip sampling keep training stable on long videos. The learned representations transfer well to downstream tasks such as action recognition, temporal localization, and video retrieval, often with a simple linear probe or light fine-tuning. ...
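The sketch below illustrates the masked latent-prediction objective described above; it is not the repo's API. All module names (`TinyEncoder`, `jepa_loss`, `ema_update`), dimensions, and the smooth-L1 choice are hypothetical, and for brevity masked tokens are zeroed rather than dropped from the context encoder's input:

```python
# Minimal sketch of JEPA-style masked latent prediction (hypothetical names).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Stand-in for a spatiotemporal ViT backbone operating on tokens."""
    def __init__(self, dim=64):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.net = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):
        return self.net(x)

dim, n_tokens = 64, 128
context_encoder = TinyEncoder(dim)
predictor = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

# Target encoder starts as a copy of the context encoder and receives no
# gradients -- it is updated only by an exponential moving average (EMA).
target_encoder = copy.deepcopy(context_encoder)
for p in target_encoder.parameters():
    p.requires_grad_(False)

def jepa_loss(tokens, mask):
    """tokens: (B, N, D) space-time patch tokens; mask: (B, N) bool, True = masked."""
    with torch.no_grad():
        targets = target_encoder(tokens)                   # full-clip targets, no grad
    visible = tokens.masked_fill(mask.unsqueeze(-1), 0.0)  # hide masked content
    context = context_encoder(visible)                     # encode visible info only
    preds = predictor(context)                             # predict latents everywhere
    # Regression on the masked positions only -- no pixel reconstruction.
    return F.smooth_l1_loss(preds[mask], targets[mask])

@torch.no_grad()
def ema_update(momentum=0.998):
    """Slowly move target weights toward the context encoder's weights."""
    for pt, pc in zip(target_encoder.parameters(), context_encoder.parameters()):
        pt.mul_(momentum).add_(pc, alpha=1.0 - momentum)

# One toy step: random "video tokens" and a random space-time mask.
tokens = torch.randn(2, n_tokens, dim)
mask = torch.rand(2, n_tokens) < 0.5
loss = jepa_loss(tokens, mask)
loss.backward()
ema_update()
```

Because the targets come from the slowly moving EMA encoder under `torch.no_grad()`, the predictor cannot trivially collapse to copying its own output, which is the role the pixel loss would otherwise have to play.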