FlashInfer: Kernel Library for LLM Serving
Neural Network Compression Framework for enhanced OpenVINO inference
OpenAI-style API for open large language models
MII makes low-latency and high-throughput inference possible
Integrate, train and manage any AI models and APIs with your database
Pytorch domain library for recommendation systems
Multilingual Automatic Speech Recognition with word-level timestamps
Uncover insights, surface problems, monitor, and fine-tune your LLM
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Library for OCR-related tasks powered by Deep Learning
Unified Model Serving Framework
Libraries for applying sparsification recipes to neural networks
An easy-to-use LLM quantization package with user-friendly APIs
Phi-3.5 for Mac: Locally-run Vision and Language Models
Replace OpenAI GPT with another LLM in your app
State-of-the-art diffusion models for image and audio generation
Optimizing inference proxy for LLMs
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Large Language Model Text Generation Inference
Images to inference with no labeling
Trainable models and NN optimization tools
Probabilistic reasoning and statistical analysis in TensorFlow
Build your chatbot within minutes on your favorite device
The easiest and laziest way to build multi-agent LLM applications
Efficient few-shot learning with Sentence Transformers