MII makes low-latency and high-throughput inference possible
FlashInfer: Kernel Library for LLM Serving
Neural Network Compression Framework for enhanced OpenVINO
OpenAI-style API for open large language models
Multilingual Automatic Speech Recognition with word-level timestamps
Uncover insights, surface problems, monitor, and fine-tune your LLM
Integrate, train and manage any AI models and APIs with your database
PyTorch domain library for recommendation systems
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Unified Model Serving Framework
Easy-to-use deep learning framework with 3 key features
Phi-3.5 for Mac: Locally-run Vision and Language Models
INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model
Libraries for applying sparsification recipes to neural networks
An easy-to-use LLM quantization package with user-friendly APIs
Replace OpenAI GPT with another LLM in your app
State-of-the-art diffusion models for image and audio generation
Optimizing inference proxy for LLMs
Large Language Model Text Generation Inference
Images to inference with no labeling
Trainable models and NN optimization tools
Probabilistic reasoning and statistical analysis in TensorFlow
The easiest and laziest way to build multi-agent LLM applications
Efficient few-shot learning with Sentence Transformers
PyTorch extensions for fast R&D prototyping and Kaggle farming