# Initial Release
A working implementation of RLMs that supports local, Docker, Modal, and Prime Intellect sandboxes.
* Communicates between the lm_handler and the sandbox primarily over either HTTP or sockets (first sketch below).
* Supports the sub-calls llm_query and batch_llm_query, the latter for issuing multiple sub-calls asynchronously (second sketch below).
* Supports persistent REPLs for RLM clients, meaning the REPL environment does not reset after every RLM call (third sketch below).
* Supports most major model providers (mainly tested with the OpenAI completions SDK).
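
To make the transport concrete, here is a minimal sketch of what the sandbox side of the HTTP path could look like. Everything in it is an assumption for illustration: the release notes only say that the sandbox and the lm_handler communicate over HTTP or sockets, so the `/query` endpoint, port, and payload shape below are invented.

```python
"""Hypothetical sandbox-side bridge for the HTTP transport.

The endpoint, port, and JSON payload shape are assumptions; the
release only states that the sandbox and lm_handler talk over
HTTP or sockets.
"""
import json
import urllib.request

LM_HANDLER_URL = "http://localhost:8000/query"  # invented endpoint


def llm_query(prompt: str) -> str:
    # Serialize the sub-call and POST it to the lm_handler process
    # running outside the sandbox.
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(
        LM_HANDLER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```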
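
Ignoring the transport, the two sub-call helpers might behave roughly as below. This is a sketch of the intended semantics, not the project's code: only the names llm_query and batch_llm_query appear in the release notes, and the OpenAI chat-completions SDK stands in for the provider here simply because it is the most-tested path. Signatures, the model choice, and the bodies are all assumptions.

```python
"""Sketch of the sub-call semantics, with the OpenAI SDK standing in
for the provider. Only the helper names come from the release notes."""
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment


async def _query(prompt: str, model: str = "gpt-4o-mini") -> str:
    # One sub-LM call: a single chat completion for a single prompt.
    resp = await client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def llm_query(prompt: str) -> str:
    # Blocking wrapper around one asynchronous sub-call.
    return asyncio.run(_query(prompt))


def batch_llm_query(prompts: list[str]) -> list[str]:
    # Fan many sub-calls out concurrently; results come back in order.
    async def _gather() -> list[str]:
        return await asyncio.gather(*(_query(p) for p in prompts))

    return asyncio.run(_gather())
```

A root model running inside the REPL could then, for example, call batch_llm_query over chunks of a long context to summarize them in parallel, and llm_query once more to combine the results.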
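
Finally, the persistent-REPL behavior can be pictured with the standard-library code module. This illustrates the concept only, not this project's implementation: one interpreter instance is kept per client, so state created during one RLM call is still visible in the next.

```python
import code

# One interpreter per RLM client; its namespace is reused across
# calls rather than rebuilt, so state persists between RLM calls.
repl = code.InteractiveInterpreter(locals={})

# A first RLM call defines state inside the REPL...
repl.runsource("summaries = ['chunk 1 notes', 'chunk 2 notes']")

# ...and a later call can still read it, because the environment
# was never reset in between.
repl.runsource("print(len(summaries))")  # prints: 2
```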