slime is an open-source post-training framework for large language models (LLMs), built to support reinforcement learning (RL) scaling and high-performance training workflows. It combines training and rollout modules in an extensible system: a flexible architecture connects high-throughput training backends (e.g., Megatron-LM) with a customizable data-generation pipeline, letting researchers and engineers iterate quickly on new RL training paradigms. The framework supports both synchronous and asynchronous RL workflows, and its programmable rollout interfaces simplify experimentation with custom environments and reward signals. Tight integration with SGLang and other engines improves scalability and efficiency while keeping the system maintainable and adaptable as new models and training algorithms emerge.
Features
- LLM post-training framework for reinforcement learning scaling
- High-performance training integration with Megatron-LM
- Customizable rollout and data generation workflows
- Support for synchronous and asynchronous RL modes
- Extensible architecture with plugins and examples
- Documentation and guides for scalable model development
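To make the "customizable rollout" idea concrete, here is a minimal sketch of what a programmable rollout interface can look like. This is an illustrative example only, not slime's actual API: the names `Sample`, `generate_rollout`, `toy_generate`, and `length_reward` are hypothetical, and the toy generator stands in for a real inference engine such as SGLang.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Sample:
    """One RL training example: a prompt, the model's response, and its reward."""
    prompt: str
    response: str
    reward: float


def generate_rollout(prompts: List[str],
                     generate: Callable[[str], str],
                     reward_fn: Callable[[str, str], float]) -> List[Sample]:
    """Produce (prompt, response, reward) triples for one RL training step.

    Swapping in a different `generate` or `reward_fn` is all it takes to
    experiment with custom environments or reward signals.
    """
    samples = []
    for prompt in prompts:
        response = generate(prompt)          # e.g., a call into an inference engine
        reward = reward_fn(prompt, response)  # e.g., a rule-based or model-based scorer
        samples.append(Sample(prompt, response, reward))
    return samples


# Toy stand-ins for an inference engine and a reward signal.
def toy_generate(prompt: str) -> str:
    return prompt[::-1]  # placeholder "model": reverse the prompt


def length_reward(prompt: str, response: str) -> float:
    return float(len(response) == len(prompt))


batch = generate_rollout(["hello", "world"], toy_generate, length_reward)
```

In a real setup, the rollout function would call the serving engine for generation and a reward model or environment for scoring; the point of the sketch is that the data-generation loop is ordinary user code that the trainer consumes.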