Run local LLMs on any device; open source
A high-throughput and memory-efficient inference and serving engine
Everything you need to build state-of-the-art foundation models
FlashInfer: Kernel Library for LLM Serving
Simplifies the local serving of AI models from any source
Neural Network Compression Framework for enhanced OpenVINO inference
Low-latency REST API for serving text embeddings
Efficient few-shot learning with Sentence Transformers
Operating LLMs in production
DoWhy is a Python library for causal inference (see the sketch after this list)
Multilingual Automatic Speech Recognition with word-level timestamps
A high-performance ML model serving framework that offers dynamic batching
Integrate, train, and manage any AI model or API with your database
Official inference library for Mistral models
20+ high-performance LLMs with recipes to pretrain and finetune at scale
GPU environment management and cluster orchestration
PyTorch library of curated Transformer models and their components
State-of-the-art Parameter-Efficient Fine-Tuning (see the LoRA sketch after this list)
Uncover insights, surface problems, monitor, and fine-tune your LLM
Deep learning optimization library that makes distributed training easy
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
LLM training code for MosaicML foundation models
An easy-to-use LLM quantization package with user-friendly APIs
A set of Docker images for training and serving models in TensorFlow
Uplift modeling and causal inference with machine learning algorithms
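The DoWhy entry is the one item above with an explicit library name, so a minimal sketch of its standard identify-estimate-refute workflow may help. The toy dataset, column names, and effect size below are illustrative assumptions, not taken from the source.

```python
# Minimal DoWhy sketch: build a causal model, identify the estimand,
# estimate the effect, and run a refutation check.
# All data and column names here are illustrative.
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(0)
n = 1000
w = rng.normal(size=n)                        # confounder
t = (w + rng.normal(size=n) > 0).astype(int)  # treatment influenced by w
y = 2.0 * t + w + rng.normal(size=n)          # outcome, true effect ~2
df = pd.DataFrame({"w": w, "t": t, "y": y})

model = CausalModel(
    data=df,
    treatment="t",
    outcome="y",
    common_causes=["w"],
)

estimand = model.identify_effect()
estimate = model.estimate_effect(
    estimand, method_name="backdoor.linear_regression"
)
print(estimate.value)  # should land near the true effect of 2.0

refutation = model.refute_estimate(
    estimand, estimate, method_name="placebo_treatment_refuter"
)
print(refutation)
```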
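The parameter-efficient fine-tuning entry most likely refers to a LoRA-style workflow such as the Hugging Face peft library; the sketch below assumes that library, and the base model ("gpt2") and hyperparameters are illustrative choices, not from the source.

```python
# LoRA sketch with the peft library: wrap a base model so that only
# small low-rank adapter matrices are trained.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

config = LoraConfig(
    r=8,                        # rank of the LoRA update matrices
    lora_alpha=16,              # scaling factor for the adapters
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection module in GPT-2
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a small fraction of weights train
```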