Vane is a privacy-focused, AI-powered answering engine that combines web search, AI reasoning, and multiple language model providers into a locally controlled search experience. The platform supports both local LLMs through Ollama and cloud providers such as OpenAI, Claude, Gemini, and Groq, so users can choose how their queries are processed. Web search is integrated through SearxNG, alongside discussion, academic, image, and video search, to generate citation-backed responses.

Vane includes multiple search modes optimized for speed, balanced usage, or deep research, depending on the complexity of the query. Its architecture emphasizes modular orchestration, a custom provider system, streaming responses, and widget-based UI enhancements for calculations, weather, and other contextual data. Designed as a local-first alternative to commercial AI search engines, the project prioritizes privacy, extensibility, and transparent, source-backed answers.
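The local-versus-cloud provider flexibility described above is typically achieved by putting every backend behind one uniform interface. The following is a minimal sketch of that pattern; the function and provider names are illustrative assumptions, not Vane's actual API, and the network calls are stubbed out.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Provider:
    """A hypothetical uniform wrapper: any backend that can turn a
    prompt into a completion string can be registered here."""
    name: str
    complete: Callable[[str], str]

def ollama_complete(prompt: str) -> str:
    # A real implementation would POST to a local Ollama server
    # (e.g. http://localhost:11434); stubbed for illustration.
    return f"[local:{prompt}]"

def openai_complete(prompt: str) -> str:
    # A real implementation would call a cloud API with a key;
    # stubbed for illustration.
    return f"[cloud:{prompt}]"

# Registry keyed by provider name; the orchestrator never needs to
# know whether a given entry runs locally or in the cloud.
PROVIDERS: Dict[str, Provider] = {
    "ollama": Provider("ollama", ollama_complete),
    "openai": Provider("openai", openai_complete),
}

def answer(query: str, provider: str = "ollama") -> str:
    """Route a query to the selected provider."""
    return PROVIDERS[provider].complete(query)
```

Because the registry is keyed by name, swapping a local model for a cloud one is a configuration change rather than a code change, which is what makes the local-first design practical.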
## Features
- Privacy-focused AI answering engine
- Support for local and cloud-based LLMs
- Citation-backed web and academic search
- Multiple search quality and speed modes
- Streaming responses with modular orchestration
- Widgets for contextual quick-look information
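The speed/balanced/deep trade-off mentioned in the feature list can be sketched as a small mode table plus a routing heuristic. All parameter names and thresholds below are hypothetical, chosen only to illustrate how a mode might trade source count against processing depth.

```python
# Hypothetical mode table: deeper modes fetch more sources and do
# more post-processing, at the cost of latency.
SEARCH_MODES = {
    "speed":    {"max_sources": 3,  "rerank": False, "follow_ups": 0},
    "balanced": {"max_sources": 8,  "rerank": True,  "follow_ups": 1},
    "deep":     {"max_sources": 20, "rerank": True,  "follow_ups": 3},
}

def pick_mode(query: str) -> str:
    """Crude illustrative heuristic: longer, multi-clause queries
    are routed to deeper research modes."""
    words = len(query.split())
    if words <= 5:
        return "speed"
    if words <= 15:
        return "balanced"
    return "deep"
```

A real engine would likely classify query complexity with the LLM itself rather than a word count, but the shape stays the same: the chosen mode parameterizes how much retrieval and reasoning the orchestrator performs.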