This repository collects reference implementations and illustrative code accompanying a wide range of DeepMind publications, making it easier for the research community to reproduce results, inspect algorithms, and build on prior work. The top level is organized into paper-specific directories spanning domains such as deep reinforcement learning, self-supervised vision, generative modeling, scientific ML, and program synthesis: for example BYOL, Perceiver/Perceiver IO, Enformer for genomics, MeshGraphNets for physics, RL Unplugged, Nowcasting for weather, and more.

Each project folder typically includes its own README, scripts, and notebooks, so you can run experiments or explore models in isolation; many also link to associated datasets or external environments such as DeepMind Lab and StarCraft II. The codebase is primarily Jupyter notebooks and Python, reflecting an emphasis on experimentation and pedagogy rather than production packaging.
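Because each project is self-contained in its own directory with its own README, a quick way to survey what is available in a local checkout is to list the subdirectories that ship a README. The sketch below is illustrative only (the checkout path and helper name are hypothetical, not part of the repository):

```python
from pathlib import Path

def list_projects(repo_root):
    """Yield names of subdirectories that contain their own README,
    i.e. the self-contained project folders described above."""
    root = Path(repo_root)
    for sub in sorted(root.iterdir()):
        if sub.is_dir() and any(sub.glob("README*")):
            yield sub.name

# Example, assuming a local clone at ./deepmind-research (hypothetical path):
# for name in list_projects("deepmind-research"):
#     print(name)   # e.g. byol, perceiver, ...
```

From there, each project's own README documents its setup and how to run its scripts or notebooks.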
Features
- Paper-aligned reference implementations for diverse AI subfields
- Project-scoped READMEs, scripts, and notebooks for quick reproduction
- Links to datasets and research environments used in the papers
- Emphasis on educational, exploratory code over production frameworks
- Apache-2.0 licensing for broad reuse in research and education
- Active issue tracking and contributions across many paper directories