A set of Docker images for training and serving models in TensorFlow
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
Integrate, train and manage any AI models and APIs with your database
Operating LLMs in production
PyTorch domain library for recommendation systems
Lightweight Python library for adding real-time multi-object tracking to any detector
Bring the notion of Model-as-a-Service to life
OpenMMLab Model Deployment Framework
A high-performance ML model serving framework that offers dynamic batching
Framework dedicated to making neural data processing pipelines simple and fast
Libraries for applying sparsification recipes to neural networks
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
Optimizing inference proxy for LLMs
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Neural Network Compression Framework for enhanced OpenVINO inference
OpenAI-style API for open large language models
Sparsity-aware deep learning inference runtime for CPUs
Large Language Model Text Generation Inference
Images to inference with no labeling
Trainable models and NN optimization tools
Probabilistic reasoning and statistical analysis in TensorFlow
Build your chatbot within minutes on your favorite device
Easiest and laziest way to build multi-agent LLM applications
Efficient few-shot learning with Sentence Transformers
Multilingual Automatic Speech Recognition with word-level timestamps