OpenAI-style API for open large language models (see the usage sketch after this list)
Run local LLMs on any device; open-source
A high-throughput and memory-efficient inference and serving engine for LLMs
Ready-to-use OCR with 80+ supported languages
Uncover insights, surface problems, monitor, and fine-tune your LLM
Everything you need to build state-of-the-art foundation models
The official Python client for the Hugging Face Hub
FlashInfer: Kernel Library for LLM Serving
A high-performance ML model serving framework that offers dynamic batching
Tensor search for humans
Neural Network Compression Framework for enhanced OpenVINO inference
Official inference library for Mistral models
Bring the notion of Model-as-a-Service to life
Data manipulation and transformation for audio signal processing, powered by PyTorch
Deep learning optimization library that makes distributed training easy
Operating LLMs in production
Scripts for fine-tuning Meta Llama 3 with composable FSDP & PEFT methods
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Large Language Model Text Generation Inference
Library for OCR-related tasks powered by Deep Learning
State-of-the-art diffusion models for image and audio generation
Open-source tool designed to improve workload efficiency
A Pythonic framework to simplify AI service building
A set of Docker images for training and serving models in TensorFlow
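Several of the servers above (the OpenAI-style API, vLLM, and Text Generation Inference) expose OpenAI-compatible chat-completions endpoints, so one client can talk to any of them. The sketch below assumes a server is already running locally at http://localhost:8000/v1 and serving a model under the hypothetical name "my-local-model"; both values are placeholders, not details taken from this list.

```python
# Minimal sketch: querying a locally hosted, OpenAI-compatible server.
# Assumptions: the server listens at http://localhost:8000/v1 and serves a
# model registered under the hypothetical name "my-local-model".
from openai import OpenAI

# Local servers typically ignore the API key, but the client requires one.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="my-local-model",
    messages=[
        {"role": "user", "content": "Summarize what an inference server does."}
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

Switching backends only requires pointing `base_url` at a different server, which is the main appeal of the OpenAI-compatible convention these projects follow.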