evolution-strategies-starter is an archived OpenAI research project that provides a distributed implementation of the algorithm from the paper “Evolution Strategies as a Scalable Alternative to Reinforcement Learning” by Tim Salimans, Jonathan Ho, Xi Chen, and Ilya Sutskever. The repository demonstrates how to scale Evolution Strategies (ES) for reinforcement learning tasks using a master-worker architecture: the master node broadcasts parameters to multiple workers, and the workers return performance results after evaluating them. Because this design parallelizes efficiently and tolerates workers disappearing mid-run, it is well suited to Amazon EC2 spot instances, which can be reclaimed at any time. The codebase supports building custom AMIs with Packer, integrates with MuJoCo for simulation-based experiments, and includes scripts for launching and managing large-scale runs. While no longer actively maintained, the repository serves as a historical and educational reference.
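The broadcast-evaluate-aggregate loop corresponds to the ES update described in the paper: sample Gaussian perturbations of the parameters, evaluate each perturbed policy, and move the parameters along the return-weighted average of the noise. A minimal single-process sketch of that update follows; note that `evaluate` here is a toy quadratic stand-in for an RL rollout, and none of these names come from the repository's actual API.

```python
import numpy as np

def evaluate(theta):
    # Toy objective standing in for an episode return; in the real system
    # each worker would instead run a policy rollout (e.g. in MuJoCo).
    return -np.sum((theta - 1.0) ** 2)

def es_step(theta, rng, npop=50, sigma=0.1, alpha=0.02):
    """One ES update: perturb, evaluate, and recombine.

    Each of the npop workers would evaluate theta + sigma * eps_i and
    report its return; the master then forms the gradient estimate.
    """
    eps = rng.standard_normal((npop, theta.size))
    returns = np.array([evaluate(theta + sigma * e) for e in eps])
    # Normalize returns so the step size is insensitive to reward scale.
    adv = (returns - returns.mean()) / (returns.std() + 1e-8)
    return theta + alpha / (npop * sigma) * eps.T @ adv

rng = np.random.default_rng(0)
theta = np.zeros(3)
for _ in range(300):
    theta = es_step(theta, rng)
```

In the distributed version, only the random seeds and scalar returns need to cross the network: each worker can regenerate its own `eps` from a shared seed, which is what keeps the master-worker communication cheap enough to scale.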
Features
- Distributed implementation of OpenAI’s Evolution Strategies algorithm
- Master-worker architecture for scalable parallel computation
- Designed for deployment on Amazon EC2 spot instances
- Includes setup scripts for Packer-based AMI creation and configuration
- Supports MuJoCo environments for physics-based RL experiments
- Reproducible example configurations for humanoid scaling experiments