Run local LLMs on any device; open-source and available for commercial use
FlashInfer: Kernel Library for LLM Serving
A high-throughput and memory-efficient inference and serving engine for LLMs
A library for accelerating Transformer models on NVIDIA GPUs
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
The official Python client for the Hugging Face Hub (see the usage sketch after this list)
Operating LLMs in production
The easiest and laziest way to build multi-agent LLM applications
Large Language Model Text Generation Inference
Simplifies the local serving of AI models from any source
Multilingual Automatic Speech Recognition with word-level timestamps
Uncover insights, surface problems, monitor, and fine-tune your LLMs
Standardized Serverless ML Inference Platform on Kubernetes
Everything you need to build state-of-the-art foundation models
GPU environment management and cluster orchestration
An optimizing inference proxy for LLMs
Official inference library for Mistral models
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale
A set of Docker images for training and serving models in TensorFlow
PyTorch library of curated Transformer models and their components
Training and deploying machine learning models on Amazon SageMaker
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Phi-3.5 for Mac: Locally-run Vision and Language Models
Neural Network Compression Framework for enhanced OpenVINO inference
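Of the tools above, the Hugging Face Hub client is the one with a small, stable Python API, so a minimal usage sketch follows. The repo ID, filename, and search query are illustrative placeholders, not values prescribed by this list:

```python
from huggingface_hub import hf_hub_download, list_models

# Download a single file from a public model repo; results are cached
# locally under ~/.cache/huggingface. "gpt2" and "config.json" are
# illustrative placeholders for any public repo and file.
config_path = hf_hub_download(repo_id="gpt2", filename="config.json")
print(config_path)

# List a few Hub models matching a search term.
for model in list_models(search="llama", limit=3):
    print(model.id)
```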