Showing 49 open source projects for "engine"

  • 1
    Agentic Context Engine

    Make your agents learn from experience

    Agentic Context Engine (ACE) is an open-source framework designed to help AI agents improve their performance by learning from their own execution history. Instead of relying solely on model training or fine-tuning, the framework focuses on structured context engineering, allowing agents to accumulate knowledge from past successes and failures during task execution.
    Downloads: 10 This Week
  • 2
    MiroFish

    A Simple and Universal Swarm Intelligence Engine

    MiroFish is a next-generation artificial intelligence prediction engine that leverages multi-agent technology and swarm-intelligence simulation to model, simulate, and forecast complex real-world scenarios. The system extracts “seed” information from sources such as breaking news, policy documents, and market signals to construct a high-fidelity digital parallel world populated by thousands of virtual agents with independent memory and behavior rules.
    Downloads: 1,049 This Week
  • 3
    LLM Workflow Engine

    Power CLI and Workflow manager for LLMs (core package)

    ...Developers can construct structured workflows using configuration files and integrate them with tools such as Ansible playbooks or custom scripts to automate complex tasks. The engine supports multiple AI providers through a plugin architecture, allowing connections to services like OpenAI, Hugging Face, Cohere, or other compatible APIs.
    Downloads: 0 This Week
  • 4
    RTP-LLM

    Alibaba's high-performance LLM inference engine for diverse apps

    ...The framework is designed for large-scale AI services and is already used internally across several Alibaba platforms such as Taobao, Amap, and other business systems that rely on conversational or search-related AI services. RTP-LLM supports a wide variety of modern model architectures, including Qwen, DeepSeek, and Llama-based models, making it a flexible engine for deploying many different open-source LLMs.
    Downloads: 6 This Week
  • 5
    Anyquery

    Query anything (GitHub, Notion, +40 more) with SQL and let LLMs

    Anyquery is an open-source SQL query engine designed to allow users to query data from almost any source using a unified SQL interface. The system enables developers and analysts to run SQL queries on files, APIs, applications, and databases without needing separate connectors or query languages for each platform. Built on top of SQLite, the engine uses a plugin architecture that allows it to extend support to dozens of external services and data sources.
    Downloads: 11 This Week
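
    A minimal sketch of driving Anyquery from Python by shelling out to its CLI. The `-q` flag and the `github_my_repositories` table name are assumptions for illustration, not confirmed against Anyquery's documentation; check `anyquery --help` and the plugin registry for the real flags and schemas.

        import subprocess

        # Hypothetical one-shot query; the flag and table name are assumptions.
        query = (
            "SELECT full_name, stargazers_count "
            "FROM github_my_repositories "
            "ORDER BY stargazers_count DESC LIMIT 5;"
        )
        result = subprocess.run(
            ["anyquery", "-q", query],
            capture_output=True, text=True, check=True,
        )
        print(result.stdout)
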
  • 6
    MLC LLM

    Universal LLM Deployment Engine with ML Compilation

    MLC LLM is a machine learning compiler and deployment framework designed to enable efficient execution of large language models across a wide range of hardware platforms. The project focuses on compiling models into optimized runtimes that can run natively on devices such as GPUs, mobile processors, browsers, and edge hardware. By leveraging machine learning compilation techniques, MLC LLM produces high-performance inference engines that maintain consistent APIs across platforms. The system...
    Downloads: 30 This Week
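
    A short sketch of the OpenAI-style Python API that MLC LLM documents for compiled models; the model identifier below is illustrative and must point at a model already converted for MLC.

        from mlc_llm import MLCEngine

        # Illustrative model id: a weights repo pre-converted for MLC.
        model = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"
        engine = MLCEngine(model)

        # OpenAI-style chat completion, streamed token by token.
        for response in engine.chat.completions.create(
            messages=[{"role": "user", "content": "What is machine learning compilation?"}],
            model=model,
            stream=True,
        ):
            for choice in response.choices:
                print(choice.delta.content, end="", flush=True)

        engine.terminate()
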
  • 7
    LOTUS

    AI-Powered Data Processing: Use LOTUS to process all of your datasets

    ...These operators allow tasks such as semantic filtering, ranking, clustering, and summarization to be expressed directly within data processing pipelines. The LOTUS engine automatically optimizes how language models are used during execution, which can significantly improve performance and reduce computational cost.
    Downloads: 3 This Week
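
    A minimal sketch of a LOTUS semantic operator over a pandas DataFrame, based on the documented `sem_filter` API; the model name and data are illustrative.

        import pandas as pd
        import lotus
        from lotus.models import LM

        # Configure the LM backing the semantic operators (model name illustrative).
        lotus.settings.configure(lm=LM(model="gpt-4o-mini"))

        df = pd.DataFrame({
            "course": [
                "Operating Systems",
                "Machine Learning",
                "Art History",
                "Deep Learning for NLP",
            ]
        })

        # Rows are kept by an LLM judgment rather than a keyword match;
        # {course} references the column by name.
        ml_courses = df.sem_filter("{course} is related to machine learning")
        print(ml_courses)
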
  • 8
    Jlama

    Jlama is a modern LLM inference engine for Java

    Jlama is a modern inference engine written entirely in Java that enables developers to run large language models locally within Java applications. Unlike frameworks that require external APIs or remote services, Jlama performs inference directly on a machine using pre-trained models. This allows organizations to integrate generative AI features into their systems while maintaining full control over data privacy and infrastructure.
    Downloads: 2 This Week
  • 9
    XTuner

    A Next-Generation Training Engine Built for Ultra-Large MoE Models

    XTuner is a large-scale training engine designed for efficient training and fine-tuning of modern large language models, particularly mixture-of-experts architectures. The framework focuses on enabling scalable training for extremely large models while maintaining efficiency across distributed computing environments. Unlike traditional 3D parallel training strategies, XTuner introduces optimized parallelism techniques that simplify scaling and reduce system complexity when training massive models. ...
    Downloads: 2 This Week
  • 10
    vLLM

    A high-throughput and memory-efficient inference and serving engine

    vLLM is a fast and easy-to-use library for LLM inference and serving. It provides high-throughput serving with various decoding algorithms, including parallel sampling and beam search.
    Downloads: 50 This Week
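
    A minimal offline-inference sketch using vLLM's Python API; the model name is illustrative, and any supported Hugging Face causal LM works the same way.

        from vllm import LLM, SamplingParams

        # Load a model and define decoding settings (model name illustrative).
        llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
        params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

        # Batched generation: vLLM schedules all prompts together for throughput.
        outputs = llm.generate(["The capital of France is", "Once upon a time"], params)
        for out in outputs:
            print(out.outputs[0].text)
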
  • 11
    Mooncake

    Mooncake is the serving platform for Kimi

    ...The platform was originally developed as part of the serving infrastructure for the Kimi large language model system. Its architecture centers on a high-performance transfer engine that provides unified data transfer across different storage and networking technologies. This engine enables efficient movement of tensors and model data across heterogeneous environments such as GPU memory, system memory, and distributed storage systems. Mooncake also introduces distributed key-value cache storage that allows inference systems to reuse previously computed attention states, significantly improving throughput in large-scale deployments. ...
    Downloads: 7 This Week
  • 12
    SimpleLLM

    950-line, minimal, extensible LLM inference engine built from scratch

    SimpleLLM is a minimal, extensible large language model inference engine implemented in roughly 950 lines of code, built from scratch to serve both as a learning tool and a research platform for novel inference techniques. It provides the core components of an LLM runtime—such as tokenization, batching, and asynchronous execution—without the abstraction overhead of more complex engines, making it easier for developers and researchers to understand and modify.
    Downloads: 2 This Week
  • 13
    Nano-vLLM

    A lightweight vLLM implementation built from scratch

    Nano-vLLM is a lightweight implementation of the vLLM inference engine designed to run large language models efficiently while maintaining a minimal and readable codebase. The project recreates the core functionality of vLLM in a simplified architecture written in approximately a thousand lines of Python, making it easier for developers and researchers to understand how modern LLM inference systems work.
    Downloads: 2 This Week
  • 14
    Chitu

    High-performance inference framework for large language models

    Chitu is a high-performance inference engine designed to deploy and run large language models efficiently in production environments. The framework focuses on improving efficiency, flexibility, and scalability for organizations that need to run LLM inference workloads across different hardware platforms. It supports heterogeneous computing environments, including CPUs, GPUs, and various specialized AI accelerators, allowing models to run across a wide range of infrastructure configurations. ...
    Downloads: 9 This Week
  • 15
    SAG

    SQL-Driven RAG Engine

    SAG is an open-source SQL-driven retrieval-augmented generation engine that dynamically constructs knowledge graphs during query processing. Instead of relying on a static knowledge graph prepared in advance, the system automatically builds relational structures between entities while processing user queries. Documents are first decomposed into atomic semantic events, which are then represented using multidimensional natural language vectors.
    Downloads: 0 This Week
  • 16
    uzu

    A high-performance inference engine for AI models

    uzu is a high-performance inference engine designed to run artificial intelligence models efficiently on Apple Silicon hardware. Written primarily in Rust and leveraging Apple’s Metal framework, the project focuses on maximizing performance when executing large language models and other AI workloads on devices such as Mac computers with M-series chips. The engine implements a hybrid architecture in which model layers can be executed either as custom GPU kernels or through Apple’s MPSGraph API, allowing it to balance performance and compatibility depending on the workload. ...
    Downloads: 0 This Week
  • 17
    WFGY 3.0

    A tension reasoning engine over 131 S-class problems

    WFGY is an experimental open-source reasoning framework designed to improve the reliability and interpretability of large language model outputs through structured reasoning layers. The project introduces a conceptual reasoning engine that analyzes complex problems by identifying semantic compression errors and residual assumptions within a system’s reasoning process. Its architecture treats reasoning failures as measurable signals that can be detected and analyzed rather than simply observed as incorrect answers. Different versions of the framework, including WFGY 1.0, 2.0, and 3.0, represent stages of development where early conceptual ideas evolved into more structured reasoning engines and diagnostic tools. ...
    Downloads: 0 This Week
  • 18
    GraphRAG

    A modular graph-based Retrieval-Augmented Generation (RAG) system

    The GraphRAG project is a data pipeline and transformation suite that is designed to extract meaningful, structured data from unstructured text using the power of LLMs.
    Downloads: 4 This Week
  • 19
    SeaGOAT

    local-first semantic code search engine

    SeaGOAT is an open-source semantic code search engine designed to help developers explore and understand large codebases more efficiently. Instead of relying solely on traditional keyword search, it uses vector embeddings to represent the meaning of code and queries, allowing users to perform semantic searches that find relevant code even when the exact keywords are not present. The tool runs locally on a developer’s machine and processes repositories using a combination of embedding models and conventional search utilities, enabling both semantic and text-based retrieval methods. ...
    Downloads: 3 This Week
  • 20
    Farfalle

    AI search engine - self-host with local or cloud LLMs

    Farfalle is an open-source AI-powered search engine designed to provide an answer-centric search experience similar to modern conversational search systems. The project integrates large language models with multiple search APIs so that the system can gather information from external sources and synthesize responses into concise answers. It can run either with local language models or with cloud-based providers, allowing developers to deploy it privately or integrate with hosted AI services. ...
    Downloads: 0 This Week
  • 21
    mllm

    Fast Multimodal LLM on Mobile Devices

    mllm is an open-source inference engine designed to run multimodal large language models efficiently on mobile devices and edge computing environments. The framework focuses on delivering high-performance AI inference in resource-constrained systems such as smartphones, embedded hardware, and lightweight computing platforms. Implemented primarily in C and C++, it is designed to operate with minimal external dependencies while taking advantage of hardware-specific acceleration technologies such as ARM NEON and x86 AVX2 instructions. ...
    Downloads: 1 This Week
  • 22
    mistral.rs

    Fast, flexible LLM inference

    mistral.rs is a fast and flexible LLM inference engine implemented in Rust, designed to run and serve modern language models with an emphasis on performance and practical deployment. It provides multiple entry points for developers, including a CLI for running models locally and an HTTP server that exposes an OpenAI-compatible API surface for easy integration with existing clients. The project includes hardware-aware tooling that can benchmark a system and choose sensible quantization and device-mapping strategies, helping users get strong performance without manual tuning. ...
    Downloads: 1 This Week
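
    Because the server exposes an OpenAI-compatible API, any standard OpenAI client can talk to it. A sketch assuming a mistral.rs server already running locally; the port and model identifier are assumptions that depend on how the server was launched.

        from openai import OpenAI

        # Port and model id are assumptions; match them to your server launch flags.
        client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

        resp = client.chat.completions.create(
            model="mistral",
            messages=[{"role": "user", "content": "In one sentence, what does an inference engine do?"}],
        )
        print(resp.choices[0].message.content)
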
  • 23
    Emscripten

    Emscripten: An LLVM-to-WebAssembly Compiler

    ...Emscripten provides Web support for popular portable APIs such as OpenGL and SDL2, allowing complex graphical native applications to be ported, such as the Unity game engine and Google Earth. It can probably port your codebase, too.
    Downloads: 11 This Week
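
    The canonical first build from the Emscripten tutorial, driven from Python here for consistency with the other examples: compile a C file with `emcc` and get an HTML/JS/Wasm bundle.

        import pathlib
        import subprocess

        # Minimal C program to port to WebAssembly.
        pathlib.Path("hello.c").write_text(
            '#include <stdio.h>\n'
            'int main(void) { printf("hello from wasm\\n"); return 0; }\n'
        )

        # `emcc hello.c -o hello.html` emits hello.html, hello.js, and hello.wasm.
        subprocess.run(["emcc", "hello.c", "-o", "hello.html"], check=True)
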
  • 24
    MyScaleDB

    A ClickHouse fork that supports high-performance vector search

    MyScaleDB is an open-source SQL vector database designed for building large-scale AI and machine learning applications that require both analytical queries and semantic vector search. The system is built on top of the ClickHouse database engine and extends it with specialized indexing and search capabilities optimized for vector embeddings. This design allows developers to store structured data, unstructured text, and high-dimensional vector embeddings within a single database platform. MyScaleDB enables developers to perform vector similarity searches using standard SQL syntax, eliminating the need to learn specialized vector database query languages. ...
    Downloads: 0 This Week
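
    A sketch of vector search in plain SQL through a standard ClickHouse client, since MyScaleDB speaks the ClickHouse protocol. The `VECTOR INDEX ... TYPE MSTG` and `distance()` syntax follow MyScale's documented SQL extensions, but the host, table, and embedding dimension below are illustrative; verify against your server version.

        import clickhouse_connect

        client = clickhouse_connect.get_client(host="localhost", port=8123)

        # Structured columns and an embedding column live in one table;
        # the vector index type and 384-dim size are illustrative.
        client.command("""
            CREATE TABLE IF NOT EXISTS docs (
                id UInt32,
                body String,
                emb Array(Float32),
                CONSTRAINT emb_len CHECK length(emb) = 384,
                VECTOR INDEX emb_idx emb TYPE MSTG
            ) ENGINE = MergeTree ORDER BY id
        """)

        query_vec = [0.0] * 384  # stand-in for a real embedding
        rows = client.query(
            "SELECT id, body, distance(emb, %(v)s) AS d FROM docs ORDER BY d LIMIT 5",
            parameters={"v": query_vec},
        )
        for r in rows.result_rows:
            print(r)
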
  • 25
    node-llama-cpp

    Run AI models locally on your machine with Node.js bindings for llama.cpp

    node-llama-cpp is a JavaScript and Node.js binding that allows developers to run large language models locally using the high-performance inference engine provided by llama.cpp. The library enables applications built with Node.js to interact directly with local LLM models without requiring a remote API or external service. By using native bindings and optimized model execution, the framework allows developers to integrate advanced language model capabilities into desktop applications, server software, and command-line tools. ...
    Downloads: 7 This Week