Showing 19 open source projects for "lm studio rag"

  • 1
    LM Studio CLI (lms)

    LM Studio CLI

    LM Studio CLI (lms) is a command-line tool designed to help developers manage and interact with LM Studio directly from the terminal. Built with lmstudio.js, it provides a streamlined interface for controlling local language models and API server operations. The CLI ships with LM Studio 0.2.22 and newer, making it easy to manage models without additional complex setup.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 2
    Coze Studio

    An AI agent development platform with all-in-one visual tools

    Coze Studio is ByteDance’s open‑source, visual AI agent development platform. It offers no-code/low-code workflows to build, debug, and deploy conversational agents, integrating prompting, RAG-based knowledge bases, plugin systems, and workflow orchestration. Developed in Go (backend) and React/TypeScript (frontend), it uses a containerized microservices architecture suitable for enterprise deployment.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 3
    lmstudio.js

    LM Studio TypeScript SDK

    lmstudio.js is the official TypeScript and JavaScript SDK that enables developers to programmatically interact with LM Studio’s local AI runtime. The library exposes the same capabilities used internally by the LM Studio desktop application, allowing external apps to load models, run inference, and build autonomous AI workflows. It is designed to simplify the creation of local AI tools by handling complex concerns such as dependency management, hardware compatibility, and model configuration. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    MLX Engine

    LM Studio Apple MLX engine

    MLX Engine is the Apple MLX-based inference backend used by LM Studio to run large language models efficiently on Apple Silicon hardware. Built on top of the mlx-lm and mlx-vlm ecosystems, the engine provides a unified architecture capable of supporting both text-only and multimodal models. Its design focuses on high-performance on-device inference, leveraging Apple’s MLX stack to accelerate computation on M-series chips.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 5
    lms

    LM Studio CLI

    ...LMS is built using the LM Studio JavaScript SDK and integrates tightly with the LM Studio runtime environment. The interface is designed to simplify automation workflows and scripting tasks related to local AI deployment. By exposing model management capabilities through command-line commands, the tool enables developers to integrate local LLM operations into development pipelines and backend services.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
    Live Agent Studio

    Open source AI Agents hosted on the oTTomator Live Agent Studio

    Live Agent Studio is a curated repository of open-source AI agents associated with the oTTomator Live Agent Studio platform, showcasing agent implementations that illustrate how autonomous and semi-autonomous tools can be constructed with modern AI frameworks. Each agent targets a specific use case, such as content summarization, task automation, travel planning, or RAG workflows, and ships with the code or configuration needed to explore and extend it on your own, making the repository both a learning resource and a practical starting point for real projects. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
    Open Notebook

    An Open Source implementation of Notebook LM with more flexibility

    Open Notebook is an open-source, privacy-focused alternative to Google’s Notebook LM that gives users full control over their research and AI workflows. Designed to be self-hosted, it ensures complete data sovereignty by keeping your content local or within your own infrastructure. The platform supports 16+ AI providers—including OpenAI, Anthropic, Ollama, Google, and LM Studio—allowing flexible model choice and cost optimization.
    Downloads: 33 This Week
    Last Update:
    See Project
  • 8
    ai-renamer

    A Node.js CLI that uses Ollama and LM Studio models

    ...Instead of relying on manual naming or metadata, the tool analyzes the actual content of files, including images, videos, and documents, to generate descriptive and context-aware filenames. It integrates with local and cloud-based AI providers such as Ollama, LM Studio, and OpenAI, allowing users to choose between offline and API-based workflows depending on their needs. The tool supports batch processing, making it particularly useful for organizing large collections of files quickly and efficiently. It also provides customization options such as naming conventions, language preferences, and prompt modifications to tailor the output to specific use cases. ...
    Downloads: 2 This Week
    Last Update:
    See Project
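The content-aware renaming step ai-renamer performs can be illustrated with a generic sketch: take a model-generated description of a file and slugify it into a safe filename. This is not ai-renamer's actual code; the function name and the kebab-case convention below are illustrative assumptions.

```python
import re

def description_to_filename(description: str, extension: str, max_len: int = 60) -> str:
    """Turn a model-generated description into a safe, kebab-case filename.

    A generic sketch of the renaming step; ai-renamer's real logic and
    options (case styles, language, custom prompts) differ.
    """
    slug = description.lower().strip()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)   # collapse non-alphanumerics to dashes
    slug = slug.strip("-")[:max_len].rstrip("-")
    return f"{slug}.{extension.lstrip('.')}"

print(description_to_filename("Invoice from ACME Corp, March 2024", ".pdf"))
# invoice-from-acme-corp-march-2024.pdf
```

In the real tool, the description would come from a vision or language model served by Ollama or LM Studio; here it is just a string argument.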
  • 9

    askimo

    AI desktop app with local RAG, privacy-first, multi-model support

    Askimo is an open-source, privacy-first AI desktop application designed to help users work with multiple AI models from a single, consistent interface. It supports popular AI providers such as OpenAI, Anthropic (Claude), Gemini, Ollama, LocalAI, Docker AI, LM Studio, and X AI, allowing users to switch models easily without vendor lock-in. A core feature of Askimo is Retrieval-Augmented Generation (RAG). Users can connect the app to local files, documents, and project folders so the AI can answer questions using their own knowledge base. All indexing, search, and history storage happen locally, ensuring sensitive data stays on the user’s machine. ...
    Downloads: 24 This Week
    Last Update:
    See Project
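The local RAG loop described above (index local files, retrieve the most relevant chunks, prepend them to the prompt) can be sketched generically. This is not Askimo's implementation; the bag-of-words cosine scoring below is a stdlib stand-in for the real embedding models such apps use.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts; a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(chunks, key=lambda c: cosine(qv, vectorize(c)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Prepend the retrieved context to the user's question."""
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The deploy script lives in scripts/deploy.sh and needs an API token.",
    "Team lunch is every Friday at noon.",
    "Set the API token via the DEPLOY_TOKEN environment variable.",
]
print(build_prompt("How do I set the deploy API token?", docs))
```

Because everything here runs in-process, the sensitive documents never leave the machine, which is the privacy property the entry emphasizes.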
  • 10
    Nanocoder

    A beautiful local-first coding agent running in your terminal

    ...Built with TypeScript and distributed as a CLI application, nanocoder enables developers to interact with AI agents that can read files, modify code, execute commands, and assist with debugging tasks. The platform supports multiple AI providers through OpenAI-compatible APIs and can also integrate with local model runtimes such as Ollama or LM Studio. Its architecture emphasizes extensibility through custom commands and integration with Model Context Protocol servers that allow the AI agent to access additional tools.
    Downloads: 12 This Week
    Last Update:
    See Project
  • 11
    Crush

    The glamourous AI CLI coding agent for your favourite terminal 💘

    Crush is a next-generation, terminal-based AI coding assistant developed by Charm, designed to seamlessly integrate with your tools, workflows, and preferred LLMs. It provides developers with an intuitive, session-based experience where multiple contexts can be managed across projects. With flexible model switching, Crush allows you to change providers mid-session while retaining conversation history. It enhances productivity by combining LSP (Language Server Protocol) support with...
    Downloads: 22 This Week
    Last Update:
    See Project
  • 12
    ClaraVerse

    ClaraVerse is an open-source, privacy-focused ecosystem to replace ChatGPT

    ...The platform combines chat interfaces, workflow automation, and long-running task management into a single application that can connect to both local and cloud-based AI models. Users can integrate models from multiple providers such as OpenAI, Anthropic, Google, or locally hosted systems like Ollama and LM Studio, enabling flexibility in how AI capabilities are deployed and managed. The system includes a visual workflow builder that allows users to create automation pipelines where AI tools interact with external services, APIs, or datasets. ClaraVerse also includes task-tracking capabilities that allow complex research, coding, or analysis jobs to run in the background while users monitor their progress through a dashboard.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
    kolosal

    Open Source and Lightweight Local LLM Platform

    Kolosal AI is a leading open-source local LLM platform. Download, train, and run local LLMs on your device with no cloud dependencies. An open-source, lightweight alternative to LM Studio.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 14

    Spyder AI Chat Plugin

    Spyder IDE plugin providing separate chat pane for AI Assistance

    An OpenAI-compatible chat pane for Spyder 6.x. Supports OpenAI, Ollama, LM Studio, and any other server that exposes an OpenAI-compatible /v1/chat/completions endpoint. Installation with PyPI in the same environment as Spyder IDE:
    (spyder) $ pip install spyder-ai-chat
    Or from source:
    # clone / download and unzip the project source code, then:
    (spyder) $ cd spyder_ai_chat
    (spyder) $ pip install -e .
    Downloads: 0 This Week
    Last Update:
    See Project
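Tools like this one interoperate because the servers all share the OpenAI-compatible /v1/chat/completions request shape. A minimal sketch of that call in Python follows; the base URL and model name are placeholders (LM Studio's bundled server is commonly reported to listen on localhost port 1234, but check your own configuration).

```python
import json
import urllib.request

def build_chat_request(model: str, user_message: str) -> dict:
    """Payload shape for an OpenAI-compatible /v1/chat/completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
    }

def chat(base_url: str, model: str, user_message: str) -> str:
    """POST the request and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_request(model, user_message)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (assumes a local server is running; URL and model are placeholders):
# print(chat("http://localhost:1234", "your-local-model", "Hello!"))
```

Swapping providers, as the entries above describe, then amounts to changing only the base URL and model name.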
  • 15
    ChatAnyLLM

    Switch local model, cloud provider, or custom agent mid-conversation.

    ChatAnyLLM is a desktop GUI application for local inference engines (Ollama, LM Studio, OpenClaw) and cloud providers like OpenRouter. Users may manually configure any OpenAI-compatible API endpoint, connecting third-party providers such as Groq or Cerebras. The application stores conversation history locally and saves API keys with system-level encryption. It supports reasoning models, multimodal inputs, and formatting for LaTeX and code.
    Downloads: 6 This Week
    Last Update:
    See Project
  • 16
    Email to Event - ETE

    A Python app/script that automatically adds important events to your calendar.

    ...Fully tested on Seznam.cz* services; if you have a different provider with the same type of security, it should work as well. *Email uses standard IMAP; the calendar uses the iCalendar API and its authentication method. Fast setup: 1. Download and unpack. 2. Install LM Studio (recommended for GPU compute). 3. Run run_setings.bat and set your credentials for email***/calendar, etc. 4. Click SAVE. 5. Click PLAN to add the task to the Time Scheduler. 6. Verify by running run_ETE.bat. **The model must understand your language; test before use! ***In your email settings (usually on the web), create a new folder and enable auto-COPY! ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    Maestro

    Offline AI orchestration with a modern UI & model integration

    LM-Kit Maestro is a powerful offline desktop application that lets you orchestrate AI agents directly on your local machine using a modern, clean interface. Built on the robust LM-Kit.NET framework with .NET MAUI and Razor, Maestro enables you to create personalized chatbots and conversational agents while ensuring your data remains secure with no external transfers.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    gpt-oss-20b

    OpenAI’s compact 20B open model for fast, agentic, and local use

    GPT-OSS-20B is OpenAI’s smaller, open-weight language model optimized for low-latency, agentic tasks, and local deployment. With 21B total parameters and 3.6B active parameters (MoE), it fits within 16GB of memory thanks to native MXFP4 quantization. Designed for high-performance reasoning, it supports Harmony response format, function calling, web browsing, and code execution. Like its larger sibling (gpt-oss-120b), it offers adjustable reasoning depth and full chain-of-thought visibility...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    gpt-oss-120b

    OpenAI’s open-weight 120B model optimized for reasoning and tooling

    ...The model supports fine-tuning, chain-of-thought reasoning, and structured outputs, making it ideal for complex workflows. It operates in OpenAI’s Harmony response format and can be deployed via Transformers, vLLM, Ollama, LM Studio, and PyTorch. Developers can control the reasoning level (low, medium, high) to balance speed and depth depending on the task. Released under the Apache 2.0 license, it enables both commercial and research applications. The model supports function calling, web browsing, and code execution, streamlining intelligent agent development.
    Downloads: 0 This Week
    Last Update:
    See Project