Official inference library for Mistral models
Replace OpenAI GPT with another LLM in your app
The Triton Inference Server provides an optimized cloud and edge inferencing solution
High-performance API layer for serving text embedding models
Large Language Model Text Generation Inference
Library for serving Transformers models on Amazon SageMaker
A high-throughput and memory-efficient inference and serving engine for LLMs
FlashInfer: Kernel Library for LLM Serving
C++ library for high performance inference on NVIDIA GPUs
AlphaFold 3 inference pipeline
Port of Facebook's LLaMA model in C/C++
Optimizing inference proxy for LLMs
Deep learning optimization library that makes distributed training easy
ONNX Runtime: cross-platform, high performance ML inferencing
Port of OpenAI's Whisper model in C/C++
A general-purpose probabilistic programming system
C#/.NET binding of llama.cpp, including LLaMa/GPT model inference
A high-performance inference system for large language models
AirLLM: 70B model inference with a single 4GB GPU
High-performance reactive message-passing-based Bayesian inference engine
Bayesian inference with probabilistic programming
Standardized Serverless ML Inference Platform on Kubernetes
Trainable, memory-efficient, and GPU-friendly PyTorch reproduction of AlphaFold 2
High-Resolution Image Synthesis with Latent Diffusion Models
Ready-to-use OCR with 80+ supported languages