A scalable inference server for models optimized with OpenVINO
Easiest and laziest way to build multi-agent LLM applications
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Deep Learning API and Server in C++14 with support for Caffe and PyTorch
Run local LLMs such as Llama, DeepSeek, and Kokoro inside your browser
Large Language Model Text Generation Inference
Low-latency REST API for serving text embeddings
Visual Instruction Tuning: Large Language-and-Vision Assistant
OpenAI-style API for open large language models
Standardized Serverless ML Inference Platform on Kubernetes
Library for serving Transformers models on Amazon SageMaker
LLM Chatbot Assistant for the Openfire server
Serve machine learning models within a Docker container
Toolkit for inference and serving with MXNet on SageMaker
CPU/GPU inference server for Hugging Face transformer models
Deploy an ML inference service on a budget in 10 lines of code (see the sketch below)
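
To make the "few lines of code" claim in the last entry concrete, here is a minimal sketch of that style of service. It assumes FastAPI, uvicorn, and scikit-learn rather than any listed project's actual API; the toy model and `/predict` endpoint are illustrative only.

```python
# Minimal ML inference service sketch (hypothetical, not any listed project's API).
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = FastAPI()

# Toy model trained at startup; a real service would load a saved artifact.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

class Features(BaseModel):
    values: list[float]  # one feature vector (four iris measurements)

@app.post("/predict")
def predict(features: Features) -> dict:
    # Score a single feature vector and return the predicted class index.
    return {"prediction": int(model.predict([features.values])[0])}
```

Run it with `uvicorn main:app` and POST a JSON body such as `{"values": [5.1, 3.5, 1.4, 0.2]}` to `/predict`.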