Get inference running on Kubernetes: LLMs, embeddings, and speech-to-text. KubeAI serves an OpenAI-compatible HTTP API. Admins configure ML models using the Model Kubernetes Custom Resource. KubeAI can be thought of as a Model Operator (see the Kubernetes Operator pattern) that manages vLLM and Ollama servers.
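To illustrate the operator model described above, a Model custom resource might look like the following sketch. The field names and values are drawn from KubeAI's public examples, but the exact schema can differ between versions, and the model name, resource profile, and replica settings here are illustrative assumptions:

```yaml
# Sketch of a KubeAI Model custom resource (fields are illustrative;
# consult the KubeAI docs for the schema matching your installed version).
apiVersion: kubeai.org/v1
kind: Model
metadata:
  name: gemma2-2b-cpu          # becomes the "model" name in API requests
spec:
  features: [TextGeneration]
  url: ollama://gemma2:2b      # pull the model via the Ollama registry
  engine: OLlama               # KubeAI runs an Ollama server for this model
  resourceProfile: cpu:2       # assumed CPU-only profile
  minReplicas: 0               # scale from zero when idle
```

Applying the manifest with kubectl (e.g. `kubectl apply -f model.yaml`) would register the model, after which KubeAI manages the serving pods.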

Features

  • Drop-in replacement for OpenAI with API compatibility
  • Serve top OSS models (LLMs, Whisper, etc.)
  • Multi-platform: CPU-only and GPU, with TPU support coming soon
  • Scale from zero, autoscale based on load
  • Zero dependencies (does not depend on Istio, Knative, etc.)
  • Chat UI included (OpenWebUI)
  • Operates OSS model servers (vLLM, Ollama, FasterWhisper, Infinity)
  • Stream/batch inference via messaging integrations (Kafka, PubSub, etc.)
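Because the HTTP API is OpenAI-compatible, any OpenAI client can point at a KubeAI endpoint. The stdlib-only sketch below builds a standard chat-completions request body; the in-cluster base URL and model name are assumptions for illustration, not values KubeAI mandates:

```python
import json

# Assumed in-cluster endpoint; adjust service name/namespace and path
# to match your KubeAI installation.
BASE_URL = "http://kubeai/openai/v1"

def chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible /chat/completions request body."""
    return {
        # Must match the metadata.name of a deployed Model resource.
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = chat_request("gemma2-2b-cpu", "Hello!")
print(json.dumps(body))
```

The same body could be sent with curl or the official OpenAI SDK by setting the SDK's base URL to the KubeAI service.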


License

Apache License V2.0


Additional Project Details

Operating Systems

Linux, Mac, Windows

Programming Language

Go

Related Categories

Go Large Language Models (LLM), Go LLM Inference Tool

Registered

2024-09-25