Run local LLMs on any device, open-source
State-of-the-art diffusion models for image and audio generation
A Pythonic framework to simplify AI service building
Standardized Serverless ML Inference Platform on Kubernetes
The official Python client for the Hugging Face Hub
OpenAI-style API for open large language models
DoWhy is a Python library for causal inference
An unofficial Python package that returns responses from Google Bard
Integrate, train and manage any AI models and APIs with your database
A unified framework for scalable computing
LLMFlows - Simple, Explicit and Transparent LLM Apps
Framework for Accelerating LLM Generation with Multiple Decoding Heads
Lightweight anchor-free object detection model
Training and implementation of chatbots leveraging a GPT-like architecture
CPU/GPU inference server for Hugging Face transformer models
Deploy an ML inference service on a budget in 10 lines of code