FlashInfer: Kernel Library for LLM Serving
A set of Docker images for training and serving models in TensorFlow
A Pythonic framework to simplify AI service building
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
Integrate, train and manage any AI models and APIs with your database
PyTorch domain library for recommendation systems
Operating LLMs in production
Bring the notion of Model-as-a-Service to life
Lightweight Python library for real-time multi-object tracking
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
OpenMMLab Model Deployment Framework
A high-performance ML model serving framework with dynamic batching
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
A framework dedicated to neural data processing
Libraries for applying sparsification recipes to neural networks
Library for OCR-related tasks powered by Deep Learning
Optimizing inference proxy for LLMs
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Neural Network Compression Framework for enhanced OpenVINO
OpenAI-style API for open large language models
Sparsity-aware deep learning inference runtime for CPUs
Large Language Model Text Generation Inference
Images to inference with no labeling
Trainable models and NN optimization tools
Probabilistic reasoning and statistical analysis in TensorFlow