ReCall is an open-source framework for training and evaluating language models that reason through complex problems by interacting with external tools. The project builds on earlier work that taught models to search for information during reasoning, and generalizes that idea: models can call a variety of external tools, such as APIs, databases, or computation engines.

Instead of relying only on the static knowledge stored in its weights, a ReCall-trained model dynamically decides when to retrieve information or invoke an external capability mid-reasoning. The framework uses reinforcement learning to train models to make these tool calls effectively while solving multi-step reasoning tasks.
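The reason-act-observe loop described above can be sketched as follows. This is a minimal illustration, not ReCall's actual API: the `<tool_call>`/`<tool_result>` tags, the `TOOLS` registry, and the stubbed model are all hypothetical stand-ins for how a trained model might interleave generation with tool execution.

```python
import json
import re

# Hypothetical tool registry; ReCall's real tool interface may differ.
TOOLS = {
    "calc": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

TOOL_CALL_RE = re.compile(r"<tool_call>(.*?)</tool_call>", re.DOTALL)

def run_agent(model_step, prompt, max_turns=4):
    """Alternate between model generation and tool execution until the
    model emits a final answer instead of a tool call."""
    transcript = prompt
    for _ in range(max_turns):
        output = model_step(transcript)
        match = TOOL_CALL_RE.search(output)
        if match is None:
            return output  # no tool call -> treat as the final answer
        call = json.loads(match.group(1))
        result = TOOLS[call["name"]](*call["args"])
        # Feed the observation back so the next step can condition on it.
        transcript += output + f"\n<tool_result>{result}</tool_result>\n"
    return transcript

# Stub "model": first asks the calculator, then answers using the result.
def stub_model(transcript):
    if "<tool_result>" not in transcript:
        return '<tool_call>{"name": "calc", "args": ["17 * 23"]}</tool_call>'
    observed = transcript.rsplit("<tool_result>", 1)[1].split("</tool_result>")[0]
    return f"The answer is {observed}."

print(run_agent(stub_model, "What is 17 * 23?"))  # -> The answer is 391.
```

In training, the reinforcement-learning signal would reward trajectories like this one where the tool call leads to a correct final answer.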
Features
- Reinforcement learning framework for training tool-using language models
- Support for dynamic tool calling including APIs, databases, and computation engines
- Multi-step reasoning workflows that combine thinking, acting, and observing results
- Research environment for training agentic AI systems capable of complex tasks
- Benchmarks and experiments focused on reasoning and tool-augmented inference
- Extensible architecture for integrating custom tools and external services
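To make the last point concrete, a custom tool might be plugged in through a small registry like the one below. The `ToolRegistry` class and `wiki_search` function are hypothetical examples of the extension pattern, not ReCall's actual integration hooks.

```python
# Hypothetical registration API; ReCall's real extension hooks may differ.
class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name):
        """Decorator that exposes a plain function as a named tool."""
        def wrap(fn):
            self._tools[name] = fn
            return fn
        return wrap

    def call(self, name, **kwargs):
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**kwargs)

registry = ToolRegistry()

@registry.register("wiki_search")
def wiki_search(query: str) -> str:
    # Stand-in for a real API or database lookup.
    return f"top result for {query!r}"

print(registry.call("wiki_search", query="ReCall"))  # -> top result for 'ReCall'
```

Registering a tool this way keeps the external service behind a uniform name-and-arguments interface, so the model only ever has to learn one calling convention.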