Showing 36 open source projects for "chat"

  • 1
    DeepSeek LLM

    DeepSeek LLM: Let there be answers

    The DeepSeek-LLM repository hosts the code, model files, evaluations, and documentation for DeepSeek’s LLM series (notably the 67B Chat variant). Its tagline is “Let there be answers.” The repo includes an “evaluation” folder (with results like math benchmark scores) and code artifacts (e.g. pre-commit config) that support model development and deployment. According to the evaluation files, DeepSeek LLM 67B Chat achieves strong performance on math benchmarks under both chain-of-thought (CoT) and tool-assisted reasoning modes. ...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 2
    OpenAI Quickstart Python

    Python example app from the OpenAI API quickstart tutorial

    openai-quickstart-python is an official OpenAI repository containing multiple Python quickstart applications that demonstrate how to use different OpenAI API endpoints, including Chat and Assistants. It provides practical, beginner-friendly examples to help developers quickly learn how to send requests, handle responses, and build basic applications using the OpenAI Python SDK. The examples folder includes small, self-contained projects showcasing common use cases like chat completions, tool usage, and interactive interfaces. ...
    Downloads: 3 This Week
    Last Update:
    See Project
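
    As a quick orientation, a minimal chat-completion request with the OpenAI Python SDK looks like the sketch below; the model name and prompt are illustrative and not taken from the repository's examples.

        # pip install openai; expects OPENAI_API_KEY in the environment
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY automatically

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model; any chat-capable model works
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Suggest three names for a pet store."},
            ],
        )
        print(response.choices[0].message.content)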
  • 3
    GLM-4.7

    Advanced language and coding AI model

    ...Its tool-use capabilities are substantially enhanced, with notable improvements in browsing, search, and tool-integrated reasoning tasks. Overall, GLM-4.7 shows broad performance upgrades across coding, reasoning, chat, creative writing, and role-play scenarios.
    Downloads: 757 This Week
    Last Update:
    See Project
  • 4
    Qwen-Audio

    Chat & pretrained large audio language model proposed by Alibaba Cloud

    Qwen-Audio is a large audio-language model developed by Alibaba Cloud, built to accept various types of audio input (speech, natural sounds, music, singing) along with text input and to produce text output. There is also an instruction-tuned version, Qwen-Audio-Chat, which supports multi-round conversational interaction, combined audio and text input, creative tasks, and reasoning over audio. It uses multi-task training across 30+ different audio tasks and achieves strong performance on multiple benchmarks without task-specific fine-tuning. Features include flexible multi-round chat, audio understanding and reasoning, music appreciation, and tool usage (e.g. voice editing).
    Downloads: 1 This Week
    Last Update:
    See Project
  • 5
    Qwen

    The official repo of Qwen chat & pretrained large language model

    Qwen is a series of large language models developed by Alibaba Cloud, consisting of various pretrained versions like Qwen-1.8B, Qwen-7B, Qwen-14B, and Qwen-72B. These models, which range from smaller to larger configurations, are designed for a wide range of natural language processing tasks. They are openly available for research and commercial use, with Qwen's code and model weights shared on GitHub. Qwen's capabilities include text generation, comprehension, and conversation, making it a...
    Downloads: 17 This Week
    Last Update:
    See Project
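
    A minimal inference sketch via Hugging Face Transformers is shown below, assuming the Qwen/Qwen-7B-Chat checkpoint and the chat() helper exposed by the repo's remote code; verify the exact repo id and API against the model card.

        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "Qwen/Qwen-7B-Chat"  # assumed Hugging Face repo id
        tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
        model = AutoModelForCausalLM.from_pretrained(
            model_id, device_map="auto", trust_remote_code=True
        ).eval()

        # The remote code provides a chat() helper that also returns updated history
        response, history = model.chat(tokenizer, "Hello, who are you?", history=None)
        print(response)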
  • 6
    Qwen2-Audio

    Repo of Qwen2-Audio chat & pretrained large audio language model

    Qwen2-Audio is a large audio-language model by Alibaba Cloud, part of the Qwen series. It is trained to accept various audio signal inputs (including speech, sounds, etc.) and perform both voice chat and audio analysis, producing textual responses. It supports two major modes: Voice Chat (interactive voice only input) and Audio Analysis (audio + text instructions), with both base and instruction-tuned models. It is evaluated on many benchmarks (speech recognition, translation, sound classification, emotion, etc.), and offers pretrained models (e.g. 7B) released via ModelScope and Hugging Face. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
    GPT Discord Bot

    Example Discord bot written in Python that uses the completions API

    GPT Discord Bot is an example project from OpenAI that shows how to integrate the OpenAI API with Discord using Python. The bot uses the Chat Completions API (defaulting to gpt-3.5-turbo) to carry out conversational interactions and the Moderations API to filter user messages. It is built on top of the discord.py framework and the OpenAI Python library, providing a simple, extensible template for building AI-powered Discord applications. The bot supports a /chat command that spawns a public thread, carries full conversation context across messages, and gracefully closes the thread when context or message limits are reached. ...
    Downloads: 8 This Week
    Last Update:
    See Project
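
    The sketch below illustrates the same pattern (a /chat slash command that moderates the message, then calls the Chat Completions API) using discord.py; it is a simplified stand-in, not the repository's actual code, and the environment variable name is a placeholder.

        # pip install discord.py openai
        import os

        import discord
        from discord import app_commands
        from openai import OpenAI

        openai_client = OpenAI()
        intents = discord.Intents.default()
        client = discord.Client(intents=intents)
        tree = app_commands.CommandTree(client)

        @tree.command(name="chat", description="Ask the model a question")
        async def chat(interaction: discord.Interaction, message: str):
            await interaction.response.defer()  # the API call may exceed the 3s interaction window
            # Screen the user's message with the Moderations API before answering
            moderation = openai_client.moderations.create(input=message)
            if moderation.results[0].flagged:
                await interaction.followup.send("Sorry, that message was flagged by moderation.")
                return
            completion = openai_client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": message}],
            )
            await interaction.followup.send(completion.choices[0].message.content)

        @client.event
        async def on_ready():
            await tree.sync()  # register the slash command with Discord

        client.run(os.environ["DISCORD_BOT_TOKEN"])  # placeholder env var name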
  • 8
    gpt-oss

    gpt-oss-120b and gpt-oss-20b are two open-weight language models

    ...The repository provides multiple reference implementations—including PyTorch, Triton, and Metal—for educational and experimental use, as well as example clients and tools like a terminal chat app and a Responses API server.
    Downloads: 7 This Week
    Last Update:
    See Project
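
    As a rough sketch, a locally hosted Responses-API-compatible server (such as the example server mentioned above) could be queried with the OpenAI Python client as follows; the base URL, port, and model name are placeholders, not values from the repository.

        from openai import OpenAI

        # Point the client at the local server instead of api.openai.com
        client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

        response = client.responses.create(
            model="gpt-oss-20b",  # placeholder served-model name
            input="Explain what an open-weight model is in one sentence.",
        )
        print(response.output_text)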
  • 9
    CogVLM

    A state-of-the-art open visual language model

    ...The repo provides multiple ways to run models (CLI, web demo, and OpenAI-Vision–style APIs), along with quantization options that reduce VRAM needs (e.g., 4-bit). It includes checkpoints for chat, base, and grounding variants, plus recipes for model-parallel inference and LoRA fine-tuning. The documentation covers task prompts for general dialogue, visual grounding (box→caption, caption→box, caption+boxes), and GUI agent workflows that produce structured actions with bounding boxes.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 10
    ChatGPT Retrieval Plugin

    The ChatGPT Retrieval Plugin lets you easily find personal documents

    The chatgpt-retrieval-plugin repository implements a semantic retrieval backend that lets ChatGPT (or GPT-powered tools) access private or organizational documents in natural language by combining vector search, embedding models, and plugin infrastructure. It can serve as a custom GPT plugin or function-calling backend so that a chat session can “look up” relevant documents based on user queries, inject those results into context, and respond more knowledgeably about a private knowledge base. The repo provides code for ingestion pipelines (embedding documents), APIs for querying, local server components, and privacy / PII detection modules. It also contains plugin manifest files (OpenAPI spec, plugin JSON) so that the retrieval backend can be registered in a plugin ecosystem. ...
    Downloads: 1 This Week
    Last Update:
    See Project
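
    A rough sketch of querying a locally running instance is shown below; the address, bearer token, and exact response fields are assumptions and should be checked against the repo's OpenAPI spec.

        import requests

        resp = requests.post(
            "http://localhost:8000/query",  # assumed local address of the plugin server
            headers={"Authorization": "Bearer YOUR_BEARER_TOKEN"},  # placeholder token
            json={"queries": [{"query": "What is our refund policy?", "top_k": 3}]},
            timeout=30,
        )
        resp.raise_for_status()
        # Assumed response shape: one result set per query, each containing scored text chunks
        for query_result in resp.json().get("results", []):
            for chunk in query_result.get("results", []):
                print(chunk.get("text", "")[:200])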
  • 11
    Anthropic SDK Python

    Provides convenient access to the Anthropic REST API from any Python 3

    The anthropic-sdk-python repository is the official Python client library for interacting with the Anthropic (Claude) REST API. It is designed to provide a user-friendly, type-safe, and asynchronous/synchronous capable interface for making chat/completion requests to models like Claude. The library includes definitions for all request and response parameters using Python typed objects, automatically handles serialization and deserialization, and wraps HTTP logic (timeouts, retries, error mapping) so that developers can call the API in a clean, high-level way. The SDK supports both synchronous and asynchronous usage (via async/await) depending on context. ...
    Downloads: 3 This Week
    Last Update:
    See Project
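
    A minimal synchronous call with the SDK looks like the sketch below; the model alias is an example.

        # pip install anthropic; expects ANTHROPIC_API_KEY in the environment
        from anthropic import Anthropic

        client = Anthropic()  # reads ANTHROPIC_API_KEY automatically

        message = client.messages.create(
            model="claude-3-5-sonnet-latest",  # example model alias
            max_tokens=256,
            messages=[{"role": "user", "content": "Summarize what an SDK is in two sentences."}],
        )
        print(message.content[0].text)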
  • 12
    DeepSeek MoE

    Towards Ultimate Expert Specialization in Mixture-of-Experts Language

    ...For example, their MoE variant with 16.4B parameters claims comparable or better performance to standard dense models like DeepSeek 7B or LLaMA2 7B using about 40% of the total compute. The repo publishes both Base and Chat variants of the 16B MoE model (deepseek-moe-16b) and provides evaluation results across benchmarks. It also includes a quick start with inference instructions (using Hugging Face Transformers) and guidance on fine-tuning (DeepSpeed, hyperparameters, quantization). The licensing is MIT for code, with a “Model License” applied to the models.
    Downloads: 1 This Week
    Last Update:
    See Project
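
    A minimal version of the Transformers-based quick start might look like the sketch below; the Hugging Face repo id is assumed from the card and should be checked against the repository's README.

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "deepseek-ai/deepseek-moe-16b-chat"  # assumed repo id for the Chat variant
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
        )

        messages = [{"role": "user", "content": "Who are you?"}]
        input_ids = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        output_ids = model.generate(input_ids, max_new_tokens=100)
        print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))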
  • 13
    MiMo-V2-Flash

    MiMo-V2-Flash: Efficient Reasoning, Coding, and Agentic Foundation

    ...It uses an MoE setup where a very large total parameter count is available, but only a smaller subset is activated per token, which helps balance capability with runtime efficiency. The project positions the model for workflows that require tool use, multi-step planning, and higher throughput, rather than only single-turn chat. Architecturally, it highlights attention and prediction choices aimed at accelerating generation while preserving instruction-following quality in complex prompts. The repository typically serves as a launch point for running the model, understanding its intended use cases, and reproducing or extending its evaluation on reasoning and agent-style tasks. ...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 14
    LLamaSharp

    C#/.NET binding of llama.cpp, including LLaMa/GPT model inference

    The C#/.NET binding of llama.cpp. It provides APIs to run inference with LLaMA models and deploy them in a local environment, and it works on Windows, Linux, and macOS without requiring you to compile llama.cpp yourself. Its performance is close to that of llama.cpp. Furthermore, it integrates with other projects such as BotSharp to provide higher-level applications and UIs.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    VibeThinker

    Diversity-driven optimization and large-model reasoning ability

    ...It contains about 1.5 billion parameters, far smaller than many “frontier” models, yet it is explicitly optimized for reasoning, mathematics, and code generation tasks rather than general open-domain chat. The innovation lies in its training methodology: the team uses what they call the Spectrum-to-Signal Principle (SSP), where a first stage emphasizes diversity of reasoning paths (the “spectrum” phase) and a second stage uses reinforcement techniques (the “signal” phase) to refine toward correctness and strong reasoning. The result is a model that outpaces many much larger models on domain-specific benchmarks, demonstrating that smaller models, if trained carefully and with the right objectives, can achieve high performance in reasoning-centric tasks.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 16
    Anthropic SDK TypeScript

    Access to Anthropic's safety-first language model APIs

    anthropic-sdk-typescript is the TypeScript / JavaScript client library for the Anthropic REST API, enabling backend or Node.js usage of models like Claude. It wraps API endpoints for creating messages, streaming responses, and managing parameters in a type-safe TS environment. The library is designed for server-side use, interfacing with REST, and is stable for integration in web services or backend agents. Example usage shows how to instantiate the Anthropic client, call...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    Qwen-VL

    Chat & pretrained large vision language model

    Qwen-VL is Alibaba Cloud’s vision-language large model family, designed to integrate visual and linguistic modalities. It accepts image inputs (with optional bounding boxes) and text, and produces text (and sometimes bounding boxes) as output. The model variants (VL-Plus, VL-Max, etc.) have been upgraded for better visual reasoning, text recognition from images, fine-grained understanding, and support for high image resolutions / extreme aspect ratios. Qwen-VL supports multilingual inputs...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 18
    GLM-4-32B-0414

    Open Multilingual Multimodal Chat LMs

    GLM-4-32B-0414 is a powerful open-source large language model featuring 32 billion parameters, designed to deliver performance comparable to leading models like OpenAI’s GPT series. It supports multilingual and multimodal chat capabilities with an extensive 32K token context length, making it ideal for dialogue, reasoning, and complex task completion. The model is pre-trained on 15 trillion tokens of high-quality data, including substantial synthetic reasoning datasets, and further enhanced with reinforcement learning and human preference alignment for improved instruction-following and function calling. ...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 19
    Alpaca.cpp

    Locally run an Instruction-Tuned Chat-Style LLM

    ...This combines the LLaMA foundation model with an open reproduction of Stanford Alpaca (a fine-tuning of the base model to obey instructions, akin to the RLHF used to train ChatGPT) and a set of modifications to llama.cpp to add a chat interface. Download the zip file corresponding to your operating system from the latest release. The weights are based on the published fine-tunes from alpaca-lora, converted back into a PyTorch checkpoint with a modified script and then quantized with llama.cpp in the usual way.
    Downloads: 7 This Week
    Last Update:
    See Project
  • 20
    Nemotron 3

    Large language model developed and released by NVIDIA

    NVIDIA-Nemotron-3-Nano-30B-A3B-FP8 is a state-of-the-art large language model developed and released by NVIDIA as part of its Nemotron 3 family, optimized for high-efficiency inference and strong reasoning performance in open AI workloads. It is the post-trained and FP8-quantized variant of the Nemotron 3 Nano model, meaning its weights and activations are represented in 8-bit floating point (FP8) to dramatically reduce memory usage and computational cost while retaining high accuracy. The...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    Qwen2-7B-Instruct

    Instruction-tuned 7B language model for chat and complex tasks

    Qwen2-7B-Instruct is a 7.62-billion-parameter instruction-tuned language model from the Qwen2 series developed by Alibaba's Qwen team. Built on a transformer architecture with SwiGLU activation and group query attention, it is optimized for chat, reasoning, coding, multilingual tasks, and extended context understanding up to 131,072 tokens. The model was pretrained on a large-scale dataset and aligned via supervised fine-tuning and direct preference optimization. It shows strong performance across benchmarks such as MMLU, MT-Bench, GSM8K, and HumanEval, often surpassing similarly sized open-source models. ...
    Downloads: 0 This Week
    Last Update:
    See Project
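
    A minimal Transformers sketch using the standard chat template is shown below.

        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "Qwen/Qwen2-7B-Instruct"
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

        messages = [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Give me a short introduction to large language models."},
        ]
        text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
        inputs = tokenizer([text], return_tensors="pt").to(model.device)
        output_ids = model.generate(**inputs, max_new_tokens=256)
        new_tokens = output_ids[0][inputs.input_ids.shape[1]:]
        print(tokenizer.decode(new_tokens, skip_special_tokens=True))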
  • 22
    Grok-2.5

    Large-scale xAI model for local inference with SGLang, Grok-2.5

    ...Grok-2.5 supports advanced inference with multi-GPU configurations, requiring at least 8 GPUs with more than 40 GB of memory each for optimal performance. It integrates with the SGLang framework to enable serving, testing, and chat-style interactions. The model comes with a post-training architecture and requires the correct chat template to function properly. It is released under the Grok 2 Community License Agreement, encouraging community experimentation and responsible use.
    Downloads: 0 This Week
    Last Update:
    See Project
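
    Once a checkpoint is served with SGLang, it can typically be queried through SGLang's OpenAI-compatible endpoint; the sketch below assumes a locally running server, and the base URL, port, and served-model name are placeholders.

        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")  # assumed local server
        completion = client.chat.completions.create(
            model="grok-2.5",  # placeholder served-model name
            messages=[{"role": "user", "content": "What hardware do I need to run you?"}],
        )
        print(completion.choices[0].message.content)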
  • 23
    translategemma-4b-it

    Lightweight multimodal translation model for 55 languages

    ...With a compact ~5B parameter footprint and BF16 support, the model is designed to run efficiently on laptops, desktops, and private cloud infrastructure, making advanced translation accessible without heavy hardware requirements. TranslateGemma uses a structured chat template that enforces explicit source and target language codes, ensuring consistent, deterministic behavior and reducing ambiguity in multilingual pipelines. It integrates seamlessly with Hugging Face Transformers through pipelines or direct model initialization, supporting GPU acceleration and scalable deployment.
    Downloads: 0 This Week
    Last Update:
    See Project
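
    A generic Transformers pipeline sketch is shown below; the Hugging Face repo id and the way the source and target language codes are expressed in the message are assumptions, so the model card's documented chat template should be followed in practice.

        from transformers import pipeline

        translator = pipeline(
            "text-generation",
            model="google/translategemma-4b-it",  # assumed repo id
            torch_dtype="bfloat16",
            device_map="auto",
        )
        # Placeholder message format; the model card specifies the exact source/target fields
        messages = [
            {"role": "user", "content": "Translate from en to fr: The weather is lovely today."}
        ]
        out = translator(messages, max_new_tokens=128)
        print(out[0]["generated_text"][-1]["content"])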
  • 24
    Ministral 3 8B Instruct 2512

    Compact 8B multimodal instruct model optimized for edge deployment

    ...It combines an 8.4B-parameter language model with a 0.4B vision encoder, enabling both text reasoning and image understanding. This FP8 instruct-fine-tuned variant is optimized for chat, instruction following, and structured outputs, making it ideal for daily assistant tasks and lightweight agentic workflows. Designed for edge deployment, the model can run on a wide range of hardware and fits locally on a single 12GB GPU, with the option for even smaller quantized configurations. Its multilingual support covers dozens of major languages, allowing it to work across diverse global environments and applications. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    Ministral 3 3B Instruct 2512

    Ultra-efficient 3B multimodal instruct model built for edge deployment

    ...It includes a 3.4B-parameter language model paired with a 0.4B vision encoder, enabling it to understand both text and visual inputs. As an FP8 instruct-fine-tuned model, it is optimized for chat, instruction following, and compact agentic tasks while maintaining strong adherence to system prompts. Despite its small size, it delivers efficient real-time performance and can run locally on a single 8GB GPU, with further memory reductions through quantization. It supports dozens of languages across major global regions, making it well-suited for multilingual and embedded applications. ...
    Downloads: 0 This Week
    Last Update:
    See Project