Showing 110 open source projects for "inference"

  • 1
    CodeGeeX2

    CodeGeeX2: A More Powerful Multilingual Code Generation Model

    ...The model excels at code generation, translation, summarization, debugging, and comment generation, and it supports over 100 programming languages. With improved inference efficiency, quantization options, and multi-query/flash attention, CodeGeeX2 achieves faster generation speeds and lightweight deployment, requiring as little as 6GB GPU memory at INT4 precision. Its backend powers the CodeGeeX IDE plugins for VS Code, JetBrains, and other editors, offering developers interactive AI assistance with features like infilling and cross-file completion.
    Downloads: 1 This Week
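    A minimal sketch of local inference with Hugging Face transformers, assuming the published THUDM/codegeex2-6b checkpoint and a CUDA GPU; verify the exact API against the repository:

        # Sketch: load CodeGeeX2 and complete a code prompt (model id assumed).
        from transformers import AutoTokenizer, AutoModel

        tokenizer = AutoTokenizer.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True)
        model = AutoModel.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True).half().cuda()
        model = model.eval()

        prompt = "# language: Python\n# write a bubble sort function\n"
        inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(inputs, max_new_tokens=128)
        print(tokenizer.decode(outputs[0]))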
  • 2
    CogVLM

    A state-of-the-art open visual language model

    ...The repo provides multiple ways to run models (CLI, web demo, and OpenAI-Vision–style APIs), along with quantization options that reduce VRAM needs (e.g., 4-bit). It includes checkpoints for chat, base, and grounding variants, plus recipes for model-parallel inference and LoRA fine-tuning. The documentation covers task prompts for general dialogue, visual grounding (box→caption, caption→box, caption+boxes), and GUI agent workflows that produce structured actions with bounding boxes.
    Downloads: 1 This Week
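    A sketch of calling a locally hosted CogVLM server through an OpenAI-Vision-style endpoint; the base_url, model name, and image URL are placeholders, not values from the repo:

        # Sketch: OpenAI-Vision-style request to a local CogVLM demo server
        # (endpoint and model name are assumptions).
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
        resp = client.chat.completions.create(
            model="cogvlm-chat",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image."},
                    {"type": "image_url", "image_url": {"url": "https://example.com/demo.jpg"}},
                ],
            }],
        )
        print(resp.choices[0].message.content)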
  • 3
    CogVLM2

    GPT4V-level open-source multi-modal model based on Llama3-8B

    ...It supports bilingual interaction (Chinese and English) and has open-source versions optimized for dialogue and video comprehension. Notably, the Int4 quantized version allows efficient inference on GPUs with only 16GB of memory. The repository offers demos, API servers, fine-tuning examples, and integration with OpenAI API-compatible endpoints, making it accessible for both researchers and developers.
    Downloads: 0 This Week
  • 4
    GLM-4.7

    Advanced language and coding AI model

    GLM-4.7 is an advanced agent-oriented large language model designed as a high-performance coding and reasoning partner. It delivers significant gains over GLM-4.6 in multilingual agentic coding, terminal-based workflows, and real-world developer benchmarks such as SWE-bench and Terminal Bench 2.0. The model introduces stronger “thinking before acting” behavior, improving stability and accuracy in complex agent frameworks like Claude Code, Cline, and Roo Code. GLM-4.7 also advances “vibe...
    Downloads: 80 This Week
  • 5
    VisualGLM-6B

    Chinese and English multimodal conversational language model

    ...Trained on a large bilingual dataset — including 30 million high-quality Chinese image-text pairs from CogView and 300 million English pairs — VisualGLM-6B is designed for image understanding, description, and question answering. Fine-tuning on long visual QA datasets further aligns the model’s responses with human preferences. The repository provides inference APIs, command-line demos, web demos, and efficient fine-tuning options like LoRA, QLoRA, and P-tuning. It also supports quantization down to INT4, enabling local deployment on consumer GPUs with as little as 6.3 GB VRAM.
    Downloads: 0 This Week
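    A minimal sketch of image Q&A, assuming the THUDM/visualglm-6b checkpoint and the chat signature shown in the project README:

        # Sketch: ask VisualGLM-6B about a local image (model id and chat
        # signature assumed from the THUDM release).
        from transformers import AutoTokenizer, AutoModel

        tokenizer = AutoTokenizer.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True)
        model = AutoModel.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True).half().cuda()

        image_path = "example.jpg"  # placeholder path
        response, history = model.chat(tokenizer, image_path, "Describe this image.", history=[])
        print(response)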
  • 6
    Tongyi DeepResearch

    Tongyi Deep Research, the Leading Open-source Deep Research Agent

    ...It uses a mix of synthetic data generation, fine-tuning, and reinforcement learning; supports benchmarks spanning web search, document understanding, question answering, and “agentic” tasks; and provides inference tools, evaluation scripts, and “web agent”-style interfaces. The aim is to enable more autonomous, agentic models that can perform sustained knowledge gathering, reasoning, and synthesis across multiple modalities (web, files, etc.).
    Downloads: 0 This Week
  • 7
    CodeGeeX

    CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)

    ...CodeGeeX also powers IDE plugins for VS Code and JetBrains, offering features like code completion, translation, debugging, and annotation. The model supports Ascend 910 and NVIDIA GPUs, with optimizations like quantization and FasterTransformer acceleration for faster inference.
    Downloads: 13 This Week
  • 8
    self-llm

    Rapid fine-tuning tutorial tailored for Chinese-speaking beginners

    ...The repository focuses on helping beginners and developers understand how to run and customize modern LLMs locally rather than relying solely on hosted APIs. It provides step-by-step tutorials covering environment setup, model deployment, inference workflows, and efficient fine-tuning techniques such as LoRA and parameter-efficient training. The project also includes guides for integrating models into real applications, including command-line interfaces, web demos, and frameworks like LangChain. By combining theory, configuration instructions, and runnable examples, self-llm lowers the barrier to entry for students and engineers who want to experiment with open-source models.
    Downloads: 4 This Week
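    The parameter-efficient fine-tuning those tutorials cover reduces to a few lines with the Hugging Face peft library; the base model and hyperparameters here are illustrative assumptions, not the repo's exact recipe:

        # Sketch: wrap a causal LM with LoRA adapters (values are illustrative).
        from transformers import AutoModelForCausalLM
        from peft import LoraConfig, get_peft_model

        model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
        config = LoraConfig(
            r=8,                                  # adapter rank
            lora_alpha=32,                        # scaling factor
            target_modules=["q_proj", "v_proj"],  # projections to adapt
            lora_dropout=0.05,
            task_type="CAUSAL_LM",
        )
        model = get_peft_model(model, config)
        model.print_trainable_parameters()        # only the adapters are trainable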
  • 9
    Free LLM API resources

    A list of free LLM inference resources accessible via API

    The Free LLM API resources repository, curated by cheahjs, is a community-driven index of free and open API endpoints, tools, datasets, runtimes, and utilities for working with large language models (LLMs) without cost barriers. It collects a wide range of resources, including hosted free-tier LLM APIs, documentation links, public model endpoints, open datasets useful for training or evaluation, tooling integrations, and examples showing how to interact with these services in real applications....
    Downloads: 2 This Week
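    Most services in lists like this expose OpenAI-compatible endpoints, so one client pattern covers many providers; the base_url, key, and model name below are placeholders:

        # Sketch: a single OpenAI-compatible client reused across providers.
        from openai import OpenAI

        client = OpenAI(base_url="https://provider.example.com/v1", api_key="FREE_TIER_KEY")
        resp = client.chat.completions.create(
            model="some-free-model",
            messages=[{"role": "user", "content": "Hello from a free tier!"}],
        )
        print(resp.choices[0].message.content)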
  • 10
    LLMs-from-scratch

    Implement a ChatGPT-like LLM in PyTorch from scratch, step by step

    ...The focus is on readability, correctness, and experimentation, making it ideal for students and practitioners transitioning from theory to working systems. By the end, you have a grounded sense of how data pipelines, optimization, and inference interact to produce fluent text.
    Downloads: 3 This Week
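    The inference side of that loop bottoms out in sampling from next-token logits; a toy, framework-free illustration (the logits are a stand-in, not code from the book):

        # Toy temperature sampling over next-token logits.
        import numpy as np

        def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng(0)):
            z = np.asarray(logits, dtype=np.float64) / temperature
            probs = np.exp(z - z.max())           # numerically stable softmax
            probs /= probs.sum()
            return int(rng.choice(len(probs), p=probs))

        logits = [2.0, 1.0, 0.5, 0.1, -1.0]       # fake 5-token vocabulary
        print([sample_next_token(logits, t) for t in (0.1, 1.0)])  # low temp ~ argmax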
  • 11
    Dynamiq

    An orchestration framework for agentic AI and LLM applications

    ...The framework supports the creation of multi-agent systems where different AI agents collaborate to solve tasks such as information retrieval, document analysis, or automated decision making. Dynamiq also includes built-in support for retrieval-augmented generation pipelines that allow models to access external documents and knowledge bases during inference.
    Downloads: 2 This Week
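    The retrieval step of such a pipeline can be sketched generically in plain numpy (this is the concept, not Dynamiq's API):

        # Generic RAG retrieval: rank documents by cosine similarity, then
        # feed the top hits to the LLM as context.
        import numpy as np

        def top_k(query_vec, doc_vecs, k=2):
            q = query_vec / np.linalg.norm(query_vec)
            d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
            return np.argsort(d @ q)[::-1][:k]

        docs = ["LLMs predict tokens.", "RAG adds retrieved context.", "Agents call tools."]
        doc_vecs = np.random.default_rng(0).normal(size=(3, 8))  # stand-in embeddings
        for i in top_k(doc_vecs[1] + 0.01, doc_vecs):            # query near doc 1
            print(docs[i])  # retrieved text would be prepended to the prompt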
  • 12
    chatd

    Chat with your documents using local AI

    chatd is an open-source desktop application that allows users to interact with their documents through a locally running large language model. The software focuses on privacy and security by ensuring that all document processing and inference occur entirely on the user’s computer without sending data to external cloud services. It includes a built-in integration with the Ollama runtime, which provides a cross-platform environment for running large language models locally. The application typically runs models such as Mistral-7B and allows users to load and analyze documents while asking questions in natural language. ...
    Downloads: 2 This Week
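    Since chatd builds on Ollama, the underlying call it makes can be approximated with a raw request to Ollama's local HTTP API (default port assumed; the mistral model must already be pulled):

        # Sketch: direct request to a local Ollama daemon.
        import json
        import urllib.request

        payload = {"model": "mistral", "prompt": "Summarize my notes on Q3.", "stream": False}
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as r:
            print(json.loads(r.read())["response"])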
  • 13
    uqlm

    UQLM, Uncertainty Quantification for Language Models, is a Python package

    ...Black-box methods evaluate model outputs through multiple generations or comparative analysis, while white-box methods rely on token probabilities produced during inference. UQLM also supports ensemble strategies and model-as-judge approaches for evaluating responses. By combining multiple uncertainty metrics, the system provides more reliable indicators of when language model outputs may be unreliable.
    Downloads: 2 This Week
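    The black-box idea can be illustrated generically: sample several generations and use agreement with the modal answer as a confidence proxy. This sketch is not uqlm's actual interface:

        # Agreement among sampled generations as a confidence score.
        from collections import Counter

        def agreement_confidence(generations):
            counts = Counter(g.strip().lower() for g in generations)
            answer, n = counts.most_common(1)[0]
            return answer, n / len(generations)   # fraction agreeing with the mode

        print(agreement_confidence(["Paris", "Paris", "paris", "Lyon", "Paris"]))
        # -> ('paris', 0.8)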
  • 14
    FlagEmbedding

    Retrieval and Retrieval-augmented LLMs

    ...FlagEmbedding includes a family of models known as BGE (BAAI General Embedding), which are designed to achieve strong performance across multilingual and cross-lingual retrieval benchmarks. The toolkit provides infrastructure for inference, fine-tuning, evaluation, and dataset preparation, enabling developers to train custom embedding models for specific domains or applications. It also includes reranker models that refine search results by re-evaluating candidate documents using cross-encoder architectures, improving retrieval accuracy in complex queries.
    Downloads: 2 This Week
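    A minimal retrieval sketch with a BGE model, assuming the FlagModel helper documented in the repository (method names may vary across versions):

        # Sketch: embed a query and passages, score by inner product.
        from FlagEmbedding import FlagModel

        model = FlagModel("BAAI/bge-base-en-v1.5")
        q = model.encode(["how to speed up vector search"])
        p = model.encode(["Use an ANN index such as HNSW.", "Bake at 180C for 20 minutes."])
        print(q @ p.T)  # similarity scores; the first passage should win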
  • 15
    llm.c

    LLM training in simple, raw C/CUDA

    ...By stripping away heavy frameworks, it exposes the core math and memory flows of embeddings, attention, and feed-forward layers. The code illustrates how to wire forward passes, losses, and simple training or inference loops with direct control over arrays and buffers. Its compact design makes it easy to trace execution, profile hotspots, and understand the cost of each operation. Portability is a goal: it aims to compile with common toolchains and run on modest hardware for small experiments. Rather than delivering a production-grade stack, it serves as a reference and learning scaffold for people who want to “see the metal” behind LLMs.
    Downloads: 1 This Week
  • 16
    ChatGLM2-6B

    ChatGLM2-6B: An Open Bilingual Chat LLM

    ChatGLM2-6B is the second-gen Chinese-English conversational LLM from ZhipuAI/Tsinghua. It upgrades the base model with GLM’s hybrid pretraining objective, 1.4T bilingual tokens, and preference alignment, delivering big gains on MMLU, CEval, GSM8K, and BBH. The context window extends up to 32K (FlashAttention), and Multi-Query Attention improves speed and memory use. The repo includes Python APIs, CLI & web demos, OpenAI-style/FastAPI servers, and quantized checkpoints for lightweight local...
    Downloads: 1 This Week
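    The Python chat API is a few lines, following the pattern in the repo's README (FP16 weights assumed to fit on the GPU):

        # Sketch: conversational use of ChatGLM2-6B via transformers.
        from transformers import AutoTokenizer, AutoModel

        tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
        model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
        model = model.eval()

        response, history = model.chat(tokenizer, "Hello", history=[])
        print(response)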
  • 17
    LLM-Pruner

    On the Structural Pruning of Large Language Models

    LLM-Pruner is an open-source framework designed to compress large language models through structured pruning techniques while maintaining their general capabilities. Large language models often require enormous computational resources, making them expensive to deploy and inefficient for many practical applications. LLM-Pruner addresses this issue by identifying and removing non-essential components within transformer architectures, such as redundant attention heads or feed-forward...
    Downloads: 1 This Week
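    The flavor of structured pruning, in a toy numpy step: score whole feed-forward neurons and drop the weakest. This is a generic magnitude criterion, not LLM-Pruner's dependency-aware method:

        # Drop whole FFN neurons (rows of W_up / columns of W_down) by norm.
        import numpy as np

        rng = np.random.default_rng(0)
        W_up = rng.normal(size=(16, 8))    # d_ff x d_model up-projection
        W_down = rng.normal(size=(8, 16))  # d_model x d_ff down-projection

        scores = np.linalg.norm(W_up, axis=1) * np.linalg.norm(W_down, axis=0)
        keep = np.sort(np.argsort(scores)[-8:])   # keep the 8 strongest neurons
        W_up, W_down = W_up[keep], W_down[:, keep]
        print(W_up.shape, W_down.shape)           # (8, 8) (8, 8): layer halved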
  • 18
    CodeLlama

    Inference code for CodeLlama models

    Code Llama is a family of Llama-based code models optimized for programming tasks such as code generation, completion, and repair, with variants specialized for base coding, Python, and instruction following. The repo documents the sizes and capabilities (e.g., 7B, 13B, 34B) and highlights features like infilling and large input context to support real IDE workflows. It targets both general software synthesis and language-specific productivity, offering strong performance among open models...
    Downloads: 1 This Week
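    A minimal infilling sketch via Hugging Face transformers, assuming the codellama/CodeLlama-7b-hf checkpoint and its documented <FILL_ME> convention:

        # Sketch: fill-in-the-middle completion with Code Llama.
        from transformers import AutoTokenizer, AutoModelForCausalLM

        tok = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
        model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

        prompt = 'def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result'
        ids = tok(prompt, return_tensors="pt")["input_ids"]
        out = model.generate(ids, max_new_tokens=64)
        print(tok.batch_decode(out[:, ids.shape[1]:], skip_special_tokens=True)[0])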
  • 19
    MobileLLM

    MobileLLM: Optimizing Sub-billion Parameter Language Models

    ...The framework integrates several architectural innovations—SwiGLU activation, deep and thin network design, embedding sharing, and grouped-query attention (GQA)—to achieve a superior trade-off between model size, inference speed, and accuracy. MobileLLM demonstrates remarkable performance, with the 125M and 350M variants outperforming previous state-of-the-art models of the same scale by up to 4.3% on zero-shot commonsense reasoning tasks.
    Downloads: 1 This Week
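    Grouped-query attention, one of the techniques listed above, can be seen in a few lines of toy numpy (shapes are illustrative; this is a conceptual sketch, not MobileLLM's implementation):

        # GQA: several query heads share each key/value head, shrinking the KV cache.
        import numpy as np

        rng = np.random.default_rng(0)
        T, d = 4, 8                      # sequence length, head dim
        n_q_heads, n_kv_heads = 4, 2     # 2 query heads share each KV head

        q = rng.normal(size=(n_q_heads, T, d))
        k = rng.normal(size=(n_kv_heads, T, d))
        v = rng.normal(size=(n_kv_heads, T, d))

        group = n_q_heads // n_kv_heads
        outs = []
        for h in range(n_q_heads):
            kv = h // group                          # KV head used by this query head
            scores = q[h] @ k[kv].T / np.sqrt(d)
            w = np.exp(scores - scores.max(axis=-1, keepdims=True))
            w /= w.sum(axis=-1, keepdims=True)       # softmax over keys
            outs.append(w @ v[kv])
        print(np.stack(outs).shape)                  # (4, 4, 8): per-head outputs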
  • 20
    TTRL

    Test-Time Reinforcement Learning

    TTRL is an open-source framework for test-time reinforcement learning in large language models, with a particular focus on reasoning tasks where ground-truth labels are not available during inference. The project addresses the problem of how to generate useful reward signals from unlabeled test-time data, and its central insight is that common test-time scaling practices such as majority voting can be repurposed into reward estimates for online reinforcement learning. This makes the framework especially interesting for scenarios where models must keep adapting during evaluation or deployment instead of relying only on fixed pretraining and static fine-tuning. ...
    Downloads: 0 This Week
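    The majority-voting insight fits in a few lines; the sketch below is a generic illustration of the reward rule, not the repository's training code:

        # With no labels, treat the majority answer among sampled rollouts as a
        # pseudo-label and reward agreement with it.
        from collections import Counter

        def majority_vote_rewards(sampled_answers):
            pseudo_label, _ = Counter(sampled_answers).most_common(1)[0]
            return [1.0 if a == pseudo_label else 0.0 for a in sampled_answers]

        rollouts = ["42", "42", "41", "42"]          # answers from N sampled generations
        print(majority_vote_rewards(rollouts))       # [1.0, 1.0, 0.0, 1.0]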
  • 21
    RAPTOR

    The official implementation of RAPTOR

    ...Each level of the tree represents summaries at different levels of abstraction, allowing retrieval to operate at both detailed and high-level conceptual layers. During inference, the system can navigate this hierarchical representation to retrieve information that best matches the user’s query while preserving broader contextual understanding. This approach improves question-answering performance on complex tasks that require reasoning across long documents or multiple sources.
    Downloads: 0 This Week
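    The recursive build loop can be sketched generically; summarize and cluster below are placeholders for an LLM summarizer and an embedding-based clusterer, not RAPTOR's actual interfaces:

        # Embed chunks, cluster, summarize each cluster, recurse to a root.
        def build_tree(chunks, summarize, cluster, max_levels=3):
            levels = [chunks]
            while len(levels[-1]) > 1 and len(levels) <= max_levels:
                groups = cluster(levels[-1])                   # e.g. GMM over embeddings
                levels.append([summarize(g) for g in groups])  # one summary per cluster
            return levels  # level 0: raw chunks; last level: most abstract summaries

        def pair_up(xs):   # toy clusterer: group neighbors in pairs
            return [xs[i:i + 2] for i in range(0, len(xs), 2)]

        # toy "summarizer": concatenation
        print(build_tree(["a", "b", "c", "d"], " + ".join, pair_up))
        # [['a', 'b', 'c', 'd'], ['a + b', 'c + d'], ['a + b + c + d']]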
  • 22
    LLMs-Zero-to-Hero

    From zero to large language model (LLM) hero

    LLMs-Zero-to-Hero is an open-source educational project designed to guide learners through the complete process of understanding and building large language models from the ground up. The repository presents a structured learning pathway that begins with fundamental concepts in machine learning and progresses toward advanced topics such as model pre-training, fine-tuning, and deployment. Rather than relying entirely on existing frameworks, the project encourages readers to implement...
    Downloads: 0 This Week
  • 23
    MoBA

    MoBA: Mixture of Block Attention for Long-Context LLMs

    MoBA, short for Mixture of Block Attention, is an open-source research implementation of a novel attention mechanism designed to improve the efficiency of large language models processing extremely long contexts. The architecture adapts ideas from Mixture-of-Experts networks and applies them directly to the attention mechanism of transformer models. Instead of forcing each token to attend to every other token in the sequence, MoBA divides the context into blocks and dynamically routes...
    Downloads: 0 This Week
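    The routing step can be illustrated with toy numpy; the block size, pooling choice, and top-k value are assumptions for exposition, not the released kernels:

        # MoBA in miniature: split keys into blocks, score each block by a
        # pooled representation, attend only to the top-k blocks per query.
        import numpy as np

        rng = np.random.default_rng(0)
        T, d, block = 16, 8, 4
        q = rng.normal(size=(d,))                 # one query vector
        k = rng.normal(size=(T, d))               # full key sequence

        blocks = k.reshape(T // block, block, d)  # (num_blocks, block, d)
        gates = blocks.mean(axis=1) @ q           # score blocks by mean-pooled keys
        top2 = np.argsort(gates)[::-1][:2]        # route the query to 2 blocks
        print("attend to blocks:", sorted(top2.tolist()))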
  • 24
    VLMEvalKit

    Open-source evaluation toolkit of large multi-modality models (LMMs)

    ...Instead of requiring complex data preparation pipelines or multiple repositories for each benchmark, the system enables evaluation through simple commands that automatically handle dataset loading, model inference, and metric computation. VLMEvalKit supports generation-based evaluation methods, allowing models to produce textual responses to visual inputs while measuring performance through techniques such as exact matching or language-model-assisted answer extraction.
    Downloads: 0 This Week
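    A stand-in for the kind of exact-match scoring the toolkit automates (simplified; the real metric code also handles option letters and LLM-assisted answer extraction):

        # Normalized exact match between a model's answer and the gold option.
        def exact_match(pred: str, gold: str) -> bool:
            def norm(s: str) -> str:
                return "".join(ch for ch in s.lower() if ch.isalnum())
            return norm(pred) == norm(gold)

        print(exact_match(" B. ", "b"))   # True: punctuation and case ignored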
  • 25
    JamAI Base

    The collaborative spreadsheet for AI

    JamAI Base is an open-source backend platform designed to simplify the development of retrieval-augmented generation systems and AI-driven applications. The platform integrates both a relational database and a vector database into a single embedded architecture, allowing developers to store structured data alongside semantic embeddings. It includes built-in orchestration for large language models, vector search, and reranking pipelines so that AI applications can retrieve relevant...
    Downloads: 1 This Week