Port of OpenAI's Whisper model in C/C++
Port of Facebook's LLaMA model in C/C++
Run local LLMs on any device; open source
User-friendly AI Interface
A high-throughput and memory-efficient inference and serving engine for LLMs (sketch below)
ONNX Runtime: cross-platform, high-performance ML inferencing (sketch below)
Self-hosted, community-driven, local OpenAI-compatible API
High-performance neural network inference framework for mobile
OpenVINO™ Toolkit repository
Protect and discover secrets using Gitleaks
Single-cell analysis in Python (sketch below)
Everything you need to build state-of-the-art foundation models
Bring the notion of Model-as-a-Service to life
Connect home devices into a powerful cluster to accelerate LLM inference
Official inference library for Mistral models
LMDeploy is a toolkit for compressing, deploying, and serving LLMs (sketch below)
The easiest and laziest way to build multi-agent LLM applications
A Pythonic framework to simplify AI service building
An MLOps framework to package, deploy, monitor, and manage models
Pure C++ implementation of several models for real-time chatting
An Open-Source Programming Framework for Agentic AI
Uncover insights, surface problems, monitor, and fine-tune your LLM
Low-latency REST API for serving text embeddings
Standardized Serverless ML Inference Platform on Kubernetes
Bayesian inference with probabilistic programming (sketch below)
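The "high-throughput and memory-efficient inference and serving engine" entry matches vLLM's tagline. Assuming that, a minimal sketch of offline batch generation with its Python API; the model name is an arbitrary small placeholder:

```python
# Minimal vLLM offline-generation sketch; the model name is a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any Hugging Face causal LM works here
params = SamplingParams(temperature=0.8, max_tokens=64)

# generate() takes a batch of prompts and returns one RequestOutput per prompt
for out in llm.generate(["The capital of France is"], params):
    print(out.outputs[0].text)
```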
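For ONNX Runtime, a minimal inference sketch; "model.onnx" and the 1x3x224x224 input are placeholders for whatever model you exported:

```python
# Minimal ONNX Runtime sketch; model path and input shape are placeholders.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
name = sess.get_inputs()[0].name                       # first graph input
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy image batch
outputs = sess.run(None, {name: x})                    # None = fetch all outputs
print(outputs[0].shape)
```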
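"Single-cell analysis in Python" is Scanpy's tagline. Assuming that, a sketch of the standard preprocessing-to-clustering workflow; "pbmc.h5ad" is a placeholder dataset:

```python
# Standard Scanpy workflow sketch; "pbmc.h5ad" is a placeholder file.
import scanpy as sc

adata = sc.read_h5ad("pbmc.h5ad")
sc.pp.normalize_total(adata, target_sum=1e4)          # library-size normalization
sc.pp.log1p(adata)                                    # log-transform counts
sc.pp.highly_variable_genes(adata, n_top_genes=2000)  # feature selection
sc.pp.pca(adata)
sc.pp.neighbors(adata)                                # kNN graph for UMAP/Leiden
sc.tl.umap(adata)
sc.tl.leiden(adata)                                   # graph-based clustering
sc.pl.umap(adata, color="leiden")
```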
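For LMDeploy, a sketch of its high-level pipeline API; the model ID is illustrative:

```python
# LMDeploy pipeline sketch; the model ID is illustrative.
from lmdeploy import pipeline

pipe = pipeline("internlm/internlm2-chat-7b")  # fetches weights, builds the engine
responses = pipe(["Summarize what LMDeploy does in one sentence."])
print(responses[0].text)
```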
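The last entry's description fits several probabilistic-programming libraries; taking PyMC as a representative example, a minimal coin-bias model (the data, 62 heads in 100 flips, are made up for illustration):

```python
# Minimal Bayesian-inference sketch with PyMC; observed data are illustrative.
import pymc as pm

with pm.Model():
    p = pm.Beta("p", alpha=1, beta=1)            # uniform prior on the coin's bias
    pm.Binomial("obs", n=100, p=p, observed=62)  # likelihood: 62 heads in 100 flips
    idata = pm.sample(1000, tune=1000)           # NUTS posterior sampling

print(float(idata.posterior["p"].mean()))        # posterior mean, about 0.62
```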