Run Local LLMs on Any Device. Open-source
Ready-to-use OCR with 80+ supported languages
A high-throughput and memory-efficient inference and serving engine
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Library for OCR-related tasks powered by Deep Learning
A deep learning optimization library that makes distributed training easy
Easy-to-use speech toolkit, including self-supervised learning models
Data manipulation and transformation for audio signal processing
Phi-3.5 for Mac: Locally-run Vision and Language Models
A Pythonic framework to simplify AI service building
Operating LLMs in production
An MLOps framework to package, deploy, monitor and manage models
State-of-the-art diffusion models for image and audio generation
FlashInfer: Kernel Library for LLM Serving
Everything you need to build state-of-the-art foundation models
Uncover insights, surface problems, monitor, and fine-tune your LLM
Unified Model Serving Framework
A set of Docker images for training and serving models in TensorFlow
A lightweight vision library for performing large-scale object detection
Single-cell analysis in Python
Trainable models and neural-network optimization tools
Easiest and laziest way to build multi-agent LLM applications
A unified framework for scalable computing
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
Replace OpenAI GPT with another LLM in your app