Port of Facebook's LLaMA model in C/C++
Run local LLMs on any device. Open-source
A high-throughput and memory-efficient inference and serving engine
AI interface for tinkerers (Ollama, Haystack RAG, Python)
Optimizing inference proxy for LLMs
User-friendly AI Interface
The free, Open Source alternative to OpenAI, Claude and others
LLMs as Copilots for Theorem Proving in Lean
C#/.NET binding of llama.cpp, including LLaMa/GPT model inference
Framework that allows you to transform your Vector Database
LLM.swift is a simple and readable library
AICI: Prompts as (Wasm) Programs
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Large Language Model Text Generation Inference
Build your chatbot within minutes on your favorite device
Easiest and laziest way to build multi-agent LLM applications
Lightweight, standalone C++ inference engine for Google's Gemma models
20+ high-performance LLMs with recipes to pretrain, finetune at scale
A library to communicate with ChatGPT, Claude, Copilot, Gemini
Run local LLMs like llama, deepseek, kokoro etc. inside your browser
Simplifies the local serving of AI models from any source
Easy-to-use Speech Toolkit including Self-Supervised Learning model
An MLOps framework to package, deploy, monitor, and manage models
Serving system for machine learning models