PyTorch-BigGraph (PBG) is a system for learning embeddings on massive graphs (billions of nodes and edges), using partitioning and distributed training to keep memory and compute tractable. It shards entities into partitions and buckets edges so that each training pass touches only a small slice of the parameters, which drastically reduces peak RAM and enables horizontal scaling across machines. PBG supports multi-relation graphs (knowledge graphs) with relation-specific scoring functions, negative sampling strategies, and typed entities, making it suitable for link prediction and retrieval. Its training loop is built for throughput: asynchronous I/O, memory-mapped tensors, and lock-free updates keep GPUs and CPUs fed even at extreme scale. The toolkit includes evaluation metrics and export tools, so learned embeddings can feed downstream nearest-neighbor search, recommendation, or analytics. In practice, this design lets practitioners train high-quality embeddings on graphs whose full parameter tables would not fit in the memory of a single machine.
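To make the partitioning and multi-relation pieces concrete, here is a minimal config sketch in the style of PBG's Python config files. It is an illustration under assumptions, not a tuned recipe: the paths, the "node" entity type, the "follows"/"likes" relation names, and all hyperparameter values are placeholders; the keys follow PBG's documented config schema, though exact options vary by version.

```python
# Hypothetical PBG config file (e.g. config.py); paths and names are placeholders.

def get_torchbiggraph_config():
    return dict(
        # On-disk locations for preprocessed entities/edges and checkpoints.
        entity_path="data/example",
        edge_paths=["data/example/edges_partitioned"],
        checkpoint_path="model/example",
        # One entity type, sharded into 4 partitions. Training then iterates
        # over (lhs, rhs) partition pairs ("buckets"), so only two partitions'
        # embeddings need to be resident at any time.
        entities={"node": {"num_partitions": 4}},
        # Multi-relation graph: each relation type gets its own operator,
        # applied to one side before comparison (TransE-style translations here).
        relations=[
            {"name": "follows", "lhs": "node", "rhs": "node", "operator": "translation"},
            {"name": "likes", "lhs": "node", "rhs": "node", "operator": "translation"},
        ],
        dimension=200,
        comparator="dot",
        # Negative sampling: corrupt each positive edge with entities drawn
        # from the same batch and uniformly at random.
        num_batch_negs=50,
        num_uniform_negs=50,
        num_epochs=10,
        lr=0.1,
    )
```

Training is then launched by pointing PBG's `torchbiggraph_train` entry point at a file like this; a multi-machine run adds `num_machines` and a `distributed_init_method` to the same config.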
Features
- Partitioned training for billion-scale graphs (see the bucketing sketch after this list)
- Multi-relation scoring for knowledge graph link prediction
- Efficient negative sampling and edge bucketing
- Export and evaluation utilities for approximate nearest-neighbor (ANN) search and other downstream tasks (see the retrieval example below)
- Asynchronous I/O with memory-mapped tensors
- Distributed, multi-machine training with simple orchestration
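The memory claim behind partitioned training can be illustrated with a toy sketch (this is illustrative Python, not PBG's implementation): with P partitions there are P x P edge buckets, a bucket touches only its two endpoints' partitions, and so peak residency is roughly 2/P of the full embedding table.

```python
import numpy as np

# Toy model of edge bucketing: entity count, dimension, and partition count
# are made up for illustration.
num_entities, dim, P = 1_000_000, 64, 8
rng = np.random.default_rng(0)
part_of = rng.integers(0, P, size=num_entities)           # entity -> partition
edges = rng.integers(0, num_entities, size=(10_000, 2))   # (lhs, rhs) pairs

# Group edges into P*P buckets keyed by their endpoints' partitions; training
# sweeps the buckets so only two partitions are ever loaded together.
buckets: dict[tuple[int, int], list[tuple[int, int]]] = {}
for lhs, rhs in edges:
    key = (int(part_of[lhs]), int(part_of[rhs]))
    buckets.setdefault(key, []).append((lhs, rhs))

per_partition = num_entities // P
peak_floats = 2 * per_partition * dim   # two partitions resident per bucket
full_floats = num_entities * dim        # whole table, for comparison
print(f"buckets: {len(buckets)}, peak/full memory: {peak_floats / full_floats:.3f}")
```

For P = 8 this prints a ratio of 0.25: training a bucket needs a quarter of the memory an unpartitioned embedding table would.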
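On the export side, a common downstream use is nearest-neighbor retrieval over the exported vectors. A minimal brute-force sketch, assuming embeddings were already exported to a TSV shaped like `name<TAB>v1<TAB>...<TAB>vd` (the file name and exact column layout here are assumptions; PBG's `torchbiggraph_export_to_tsv` utility produces TSV output along these lines, with details varying by version):

```python
import numpy as np

# Load a hypothetical exported embedding file: one entity per row,
# "name \t v1 \t v2 \t ... \t vd".
names: list[str] = []
rows: list[list[float]] = []
with open("entity_embeddings.tsv") as f:
    for line in f:
        parts = line.rstrip("\n").split("\t")
        names.append(parts[0])
        rows.append([float(x) for x in parts[1:]])

emb = np.asarray(rows, dtype=np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # normalize for cosine similarity

def nearest(query_name: str, k: int = 5) -> list[tuple[str, float]]:
    """Brute-force top-k cosine neighbors of a named entity, excluding itself."""
    q = emb[names.index(query_name)]
    sims = emb @ q
    top = np.argsort(-sims)[1 : k + 1]  # skip index 0, the query itself
    return [(names[i], float(sims[i])) for i in top]

print(nearest("some_entity"))
```

At billion-node scale one would hand the same vectors to an ANN index instead of scanning them linearly, which is exactly the handoff the export utilities are designed for.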