Ollama Copilot is a proxy-based tool that turns locally hosted language models into a GitHub Copilot-style coding assistant for popular development environments. It acts as an intermediary server that exposes Ollama or other model providers through a Copilot-compatible interface, so developers can use local or self-hosted models for inline code completion. The project supports multiple providers, such as Ollama, DeepSeek, and Mistral, allowing a choice between local and remote inference depending on user needs.

It integrates with editors like Neovim, VS Code, Zed, and Emacs by redirecting Copilot traffic through a configurable proxy layer. Parameters such as context size, token prediction limits, and prompt templates can be customized, giving developers granular control over how completions are generated. The proxy also supports TLS-secured connections and can be deployed as a background service for continuous availability.
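Getting started typically looks like the sketch below. This is a minimal example, assuming a Go toolchain and a running Ollama daemon; the flag names, model, and port shown here are illustrative and should be checked against `ollama-copilot --help` for your installed version.

```sh
# Install the proxy binary (assumes Go is on PATH).
go install github.com/bernardo-bruning/ollama-copilot@latest

# Run the proxy in front of a local Ollama instance.
# The model name and proxy port are illustrative, not guaranteed defaults.
ollama-copilot -model codellama:code -proxy-port :11435
```

Once the proxy is running, editors point their Copilot plugin at the proxy port instead of GitHub's endpoint.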
## Features
- Copilot-style proxy for local or remote LLMs
- Integration with multiple IDEs and editors (see the Neovim sketch after this list)
- Support for multiple providers including Ollama and DeepSeek
- Customizable prompt templates and system prompts
- Configurable context window and token limits
- Optional background service with TLS support (a service-unit sketch follows below)
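For editor integration, a minimal Neovim setup with copilot.vim might look like the following. `g:copilot_proxy` and `g:copilot_proxy_strict_ssl` are standard copilot.vim options; the port is an assumption matching the run example above.

```vim
" Route copilot.vim requests through the local ollama-copilot proxy.
" Adjust the port to whatever the proxy is actually listening on.
let g:copilot_proxy = 'http://localhost:11435'

" The proxy's self-signed certificate will not pass strict TLS checks.
let g:copilot_proxy_strict_ssl = v:false
```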
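To run the proxy as a background service on Linux, a systemd unit is one option. This is only a sketch: the binary path and the `-cert`/`-key` flags for TLS are assumptions and should be verified against the project's documentation.

```ini
# /etc/systemd/system/ollama-copilot.service
[Unit]
Description=Ollama Copilot proxy
After=network.target

[Service]
# Binary path and TLS flags below are assumptions; adjust to your install.
ExecStart=/usr/local/bin/ollama-copilot -cert /etc/ollama-copilot/cert.pem -key /etc/ollama-copilot/key.pem
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing the unit file, enable it with `systemctl enable --now ollama-copilot`.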