multi-agent-emergence-environments is an open-source research environment framework developed by OpenAI for studying emergent behaviors in multi-agent systems. It was designed for the experiments described in the paper and blog post “Emergent Tool Use from Multi-Agent Autocurricula”, which investigated how complex cooperative and competitive behaviors can evolve through self-play.

The repository provides environment-generation code built on the mujoco-worldgen package, enabling dynamic creation of simulated physical environments. Developers can construct custom environments by combining modular components such as Boxes, Ramps, and RandomWalls in a flexible layering approach that reduces code duplication. The framework includes several predefined environments, such as Hide and Seek, Box Locking, Blueprint Construction, and Shelter Construction, each modeling a distinct problem-solving or collaboration scenario.
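The layering approach described above can be sketched as follows. This is an illustrative, self-contained Python example of the design pattern, not the library's actual API: the class names (Base, RandomWalls, Boxes, Ramps) echo components mentioned in the repository, but their signatures and internals here are simplified assumptions.

```python
class Base:
    """A bare environment skeleton that modules attach content to.

    Hypothetical stand-in for the framework's base environment; the
    real implementation builds a mujoco-worldgen world specification.
    """
    def __init__(self):
        self.modules = []

    def add_module(self, module):
        self.modules.append(module)

    def build(self):
        # Each module independently contributes its objects to the
        # world description, so features compose without duplication.
        world = {}
        for module in self.modules:
            module.build(world)
        return world


class RandomWalls:
    """Illustrative module: partitions the arena into random rooms."""
    def __init__(self, num_rooms):
        self.num_rooms = num_rooms

    def build(self, world):
        world["walls"] = {"rooms": self.num_rooms}


class Boxes:
    """Illustrative module: places movable boxes."""
    def __init__(self, n_boxes):
        self.n_boxes = n_boxes

    def build(self, world):
        world["boxes"] = self.n_boxes


class Ramps:
    """Illustrative module: places climbable ramps."""
    def __init__(self, n_ramps):
        self.n_ramps = n_ramps

    def build(self, world):
        world["ramps"] = self.n_ramps


def make_env():
    # Custom environments are assembled by stacking modules onto the
    # base; swapping a module changes the scenario without touching
    # the rest of the environment code.
    env = Base()
    env.add_module(RandomWalls(num_rooms=4))
    env.add_module(Boxes(n_boxes=2))
    env.add_module(Ramps(n_ramps=1))
    return env.build()
```

The point of the pattern is that each component owns its own generation logic, so a Hide and Seek variant and a Box Locking variant can share the same wall and box modules with different parameters.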
Features
- Implements environments from “Emergent Tool Use from Multi-Agent Autocurricula”
- Built on top of mujoco-worldgen for flexible environment generation
- Modular design using EnvModules and wrappers for easy extensibility
- Includes multiple complex multi-agent environments (e.g., Hide and Seek, Box Locking)
- Supports visualization and testing of policies via bin/examine tools
- Designed for reproducible research in emergent AI and reinforcement learning
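The wrapper mechanism in the feature list can be illustrated with a minimal sketch. The names below (CountingEnv, RewardScaleWrapper) are hypothetical and exist only to show the idea: a wrapper layers extra behavior over an inner environment's step function without modifying the environment itself.

```python
class CountingEnv:
    """Minimal stand-in environment: returns reward 1.0 each step
    and terminates after three steps. Purely illustrative."""
    def __init__(self):
        self.t = 0

    def step(self, action):
        self.t += 1
        obs = {"t": self.t}
        reward = 1.0
        done = self.t >= 3
        return obs, reward, done, {}


class RewardScaleWrapper:
    """Scales rewards from the wrapped environment, leaving
    observations and termination logic untouched."""
    def __init__(self, env, scale):
        self.env = env
        self.scale = scale

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return obs, reward * self.scale, done, info


# Wrappers compose: reward shaping, observation filtering, etc. can
# each live in their own wrapper and be stacked as needed.
env = RewardScaleWrapper(CountingEnv(), scale=0.5)
obs, reward, done, info = env.step(None)
```

Because each wrapper only touches one concern, behaviors can be mixed and matched across environments, which is what makes the design easy to extend.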