Get inference running on Kubernetes: LLMs, embeddings, and speech-to-text. KubeAI serves an OpenAI-compatible HTTP API. Admins configure ML models via the Model Kubernetes Custom Resource. KubeAI can be thought of as a Model Operator (see the Operator Pattern) that manages vLLM and Ollama servers.
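As a sketch of what such a Model resource looks like, the manifest below defines a CPU-served Llama model. The field names follow KubeAI's documentation at the time of writing, but they are assumptions about your installed CRD version; verify with `kubectl explain model.spec` before applying.

```yaml
# Hedged example of a KubeAI Model custom resource.
# Model name, url, and resourceProfile are illustrative placeholders.
apiVersion: kubeai.org/v1
kind: Model
metadata:
  name: llama-3.1-8b-instruct-cpu
spec:
  features: [TextGeneration]
  url: ollama://llama3.1:8b   # pulled and served by the Ollama engine
  engine: OLlama
  resourceProfile: cpu:8      # maps to CPU/memory requests defined by the admin
  minReplicas: 0              # enables scale-from-zero
```

Applying this manifest (`kubectl apply -f model.yaml`) is what triggers the operator to provision and autoscale the backing server.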
Features
- Drop-in replacement for OpenAI with API compatibility
- Serve top OSS models (LLMs, Whisper, etc.)
- Multi-platform: CPU-only and GPU (TPU support coming soon)
- Scale from zero, autoscale based on load
- Zero dependencies (does not depend on Istio, Knative, etc.)
- Chat UI included (OpenWebUI)
- Operates OSS model servers (vLLM, Ollama, FasterWhisper, Infinity)
- Stream/batch inference via messaging integrations (Kafka, PubSub, etc.)
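Because the API is OpenAI-compatible, existing OpenAI clients can be pointed at a KubeAI Service simply by changing the base URL. The snippet below builds such a request with only the standard library; the base URL and model name are hypothetical placeholders, so substitute the Service address and Model name from your own cluster.

```python
# Sketch: building an OpenAI-style chat completion request against KubeAI.
# KUBEAI_BASE_URL and the model name are assumptions, not fixed values.
import json
import urllib.request

KUBEAI_BASE_URL = "http://kubeai/openai/v1"  # hypothetical in-cluster address

def chat_completion_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request in the OpenAI /chat/completions format."""
    payload = {
        "model": model,  # must match a Model resource's metadata.name
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{KUBEAI_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_completion_request("llama-3.1-8b-instruct-cpu", "Hello!")
print(req.full_url)  # http://kubeai/openai/v1/chat/completions
```

Sending the request with `urllib.request.urlopen(req)` (or swapping in the official `openai` client with `base_url` set to the KubeAI Service) works without code changes elsewhere, which is what makes it a drop-in replacement.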
License
Apache License 2.0