Easiest and laziest way to build multi-agent LLM applications
Large Language Model Text Generation Inference
Run local LLMs like Llama, DeepSeek, Kokoro, etc. inside your browser
Easy-to-use Speech Toolkit including Self-Supervised Learning models
Visual Instruction Tuning: Large Language-and-Vision Assistant
OpenAI-style API for open large language models
Library for serving Transformers models on Amazon SageMaker
Standardized Serverless ML Inference Platform on Kubernetes
A high-performance ML model serving framework that offers dynamic batching (see the batching sketch after this list)
Deep Learning API and Server in C++14 with support for Caffe and PyTorch
Open platform for training, serving, and evaluating language models
LLM Chatbot Assistant for Openfire server
Serve machine learning models within a Docker container
Toolkit for inference and serving with MXNet in SageMaker
CPU/GPU inference server for Hugging Face transformer models
Deploy an ML inference service on a budget in 10 lines of code (see the minimal serving sketch after this list)
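
Several entries above describe exposing a model behind a small HTTP endpoint. The sketch below shows the general pattern with FastAPI and a placeholder model; it is an illustrative assumption, not the API of any specific project listed here (the `PredictRequest` schema and `model_predict` helper are hypothetical).

```python
# Minimal inference-service sketch (generic FastAPI pattern, not a specific project's API).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class PredictRequest(BaseModel):
    text: str


def model_predict(text: str) -> str:
    # Placeholder for a real model forward pass; swap in any loaded model here.
    return text.upper()


@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    # Return the model output as JSON.
    return {"prediction": model_predict(req.text)}
```

Saved as `main.py`, this can be served with `uvicorn main:app --port 8000` and queried by POSTing `{"text": "hello"}` to `/predict`.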
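
One entry above mentions dynamic batching. The sketch below illustrates the idea in plain asyncio: requests that arrive within a short window are grouped and run through the model as one batch, trading a little latency for throughput. It is a minimal sketch under assumed parameters (`MAX_BATCH_SIZE`, `MAX_WAIT_SECONDS`, `run_model` are hypothetical), not the batching code of any project listed here.

```python
# Dynamic-batching sketch: group concurrent requests into one model call.
import asyncio
from typing import Any, List

MAX_BATCH_SIZE = 8       # flush as soon as this many requests are queued
MAX_WAIT_SECONDS = 0.01  # or after this much time has passed


def run_model(batch: List[Any]) -> List[Any]:
    # Placeholder for a real batched forward pass.
    return [f"prediction for {item}" for item in batch]


async def batch_worker(queue: asyncio.Queue) -> None:
    loop = asyncio.get_running_loop()
    while True:
        # Block until at least one request arrives.
        item, future = await queue.get()
        batch, futures = [item], [future]
        deadline = loop.time() + MAX_WAIT_SECONDS
        # Keep collecting until the batch is full or the window closes.
        while len(batch) < MAX_BATCH_SIZE:
            timeout = deadline - loop.time()
            if timeout <= 0:
                break
            try:
                item, future = await asyncio.wait_for(queue.get(), timeout)
                batch.append(item)
                futures.append(future)
            except asyncio.TimeoutError:
                break
        # One model call serves every request in the batch.
        for fut, result in zip(futures, run_model(batch)):
            fut.set_result(result)


async def predict(queue: asyncio.Queue, item: Any) -> Any:
    # Enqueue a request and wait for the worker to fulfil it.
    future = asyncio.get_running_loop().create_future()
    await queue.put((item, future))
    return await future


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    worker = asyncio.create_task(batch_worker(queue))
    results = await asyncio.gather(*(predict(queue, i) for i in range(20)))
    print(results)
    worker.cancel()


if __name__ == "__main__":
    asyncio.run(main())
```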