Shimmy is a lightweight local inference server for running large language models with minimal overhead. Written primarily in Rust, it ships as a small standalone binary that exposes an OpenAI-compatible API, so existing applications can swap a remote AI service for a locally hosted model without significant code changes or architectural rework.

The project emphasizes performance and simplicity, using efficient runtime components to keep memory usage and startup time low compared to heavier inference frameworks. It supports modern model formats such as GGUF and SafeTensors, and it can automatically discover models stored locally or in the common directories used by other AI tools. Advanced capabilities include GPU acceleration and CPU offloading for Mixture-of-Experts models, letting large models run on consumer hardware with limited VRAM.
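As a minimal sketch of what "OpenAI-compatible" means in practice, the request below sends a standard chat-completion payload to a locally running server. The port (11435), the model name, and the dependency versions are placeholders, not values confirmed by this document; substitute whatever your own server reports.

```rust
// Hedged client sketch: posts an OpenAI-style chat-completion request to a
// local shimmy instance. Assumed Cargo.toml dependencies:
//   reqwest = { version = "0.12", features = ["blocking", "json"] }
//   serde_json = "1"

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Same request shape an OpenAI client would send to api.openai.com;
    // only the base URL changes. Model name is a placeholder.
    let body = serde_json::json!({
        "model": "my-local-model",
        "messages": [
            { "role": "user", "content": "Summarize what a GGUF file is." }
        ]
    });

    let response = reqwest::blocking::Client::new()
        .post("http://localhost:11435/v1/chat/completions") // port is assumed
        .json(&body)
        .send()?
        .text()?;

    println!("{response}");
    Ok(())
}
```

Because the wire format matches the OpenAI API, an existing application typically only needs its base URL pointed at the local server.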
Features
- Local inference server with OpenAI-compatible API endpoints
- Support for GGUF and SafeTensors model formats
- Single-binary deployment with minimal dependencies
- Automatic discovery of models in local directories and caches (see the sketch after this list)
- GPU acceleration and CPU offloading for large models
- Hot model swapping and runtime inference configuration
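To inspect which models the server has discovered, a client can query the model listing route. This sketch assumes shimmy exposes the standard OpenAI-style `GET /v1/models` endpoint, which is a reasonable guess for an OpenAI-compatible server but not confirmed here; the port is again a placeholder.

```rust
// Hedged sketch: list the models a local shimmy instance has discovered,
// assuming it serves the OpenAI-style /v1/models route (an assumption).

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let models = reqwest::blocking::get("http://localhost:11435/v1/models")?
        .text()?;
    // Expected to be a JSON listing of model ids found in local
    // directories and caches; parse or pretty-print as needed.
    println!("{models}");
    Ok(())
}
```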