FlashInfer: Kernel Library for LLM Serving
PyTorch library of curated Transformer models and their components
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
LLM training code for MosaicML foundation models
Optimizing inference proxy for LLMs
Low-latency REST API for serving text embeddings
Build your chatbot within minutes on your favorite device
The easiest, laziest way to build multi-agent LLM applications
20+ high-performance LLMs with recipes to pretrain and finetune them at scale
Library for OCR-related tasks powered by Deep Learning
Tensor search for humans
Run any Llama 2 model locally with a Gradio UI, on GPU or CPU, from anywhere
LLMFlows - Simple, Explicit, and Transparent LLM Apps
Run 100B+ language models at home, BitTorrent-style
Framework for Accelerating LLM Generation with Multiple Decoding Heads
A computer vision framework to create and deploy apps in minutes
Implementation of "Tree of Thoughts"
Implementation of model parallel autoregressive transformers on GPUs
CPU/GPU inference server for Hugging Face transformer models