The Triton Inference Server provides an optimized cloud and edge inferencing solution
Query MCP enables end-to-end management of Supabase via a chat interface
A simple native web interface that uses ChatTTS to synthesize speech from text
OpenAI-style API for open large language models
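Servers like this expose the OpenAI chat-completions wire format so existing clients work unchanged. A minimal sketch of the request body such a server typically accepts (the model name and endpoint path are illustrative assumptions, not taken from any specific project):

```python
import json

# Illustrative OpenAI-style chat-completions request body.
# "my-local-model" is an assumed model name; the server would
# receive this as a POST to /v1/chat/completions.
payload = {
    "model": "my-local-model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
    "stream": False,
}

body = json.dumps(payload)
```

Because the shape matches OpenAI's API, off-the-shelf client libraries can be pointed at such a server by changing only the base URL.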
WhatsApp MCP server enabling AI access to chats and messaging
Free, high-quality text-to-speech API endpoint to replace OpenAI
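Drop-in TTS replacements like this mirror OpenAI's `/v1/audio/speech` request body, so clients only swap the base URL. A sketch of that body (the model and voice values shown are assumptions):

```python
import json

# Request body in the shape of OpenAI's /v1/audio/speech endpoint,
# which drop-in replacements typically accept unchanged.
# "tts-1" and "alloy" are assumed example values.
speech_request = {
    "model": "tts-1",
    "input": "Hello from a local text-to-speech server.",
    "voice": "alloy",
    "response_format": "mp3",
}

encoded = json.dumps(speech_request)
```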
Easiest and laziest way to build multi-agent LLM applications
High-performance inference server and API layer for text embedding models
MCP server that integrates Confluence and Jira
Dockerized FastAPI wrapper for Kokoro-82M text-to-speech model
Nexa SDK is a comprehensive toolkit for supporting ONNX and GGML models
TensorRT-LLM provides users with an easy-to-use Python API to define large language models and build optimized TensorRT engines
Official MiniMax Model Context Protocol (MCP) server
Python SDK for the Computer Use model Lux, developed by OpenAGI
Low-latency REST API for serving text-embeddings
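Embedding servers like the two above usually follow the OpenAI embeddings shape: one request carries a batch of strings, and the response returns one vector per input. A sketch under that assumption (the model name and vector values are illustrative):

```python
# Typical request/response shape for an OpenAI-compatible embeddings
# endpoint. The model name and the tiny two-dimensional vectors below
# are illustrative assumptions, not real model output.
request_body = {
    "model": "all-MiniLM-L6-v2",
    "input": ["hello world", "text embeddings"],
}

response_body = {
    "object": "list",
    "data": [
        {"object": "embedding", "index": 0, "embedding": [0.01, -0.02]},
        {"object": "embedding", "index": 1, "embedding": [0.03, 0.04]},
    ],
}

# The server returns one embedding per input string, index-aligned.
assert len(response_body["data"]) == len(request_body["input"])
```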
The official gpt4free repository
Provides convenient access to the Anthropic REST API from any Python 3 application
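The SDK wraps the Messages API; the JSON body it ultimately sends looks like the sketch below (the model id is an assumption, and a real request also carries `x-api-key` and `anthropic-version` headers):

```python
import json

# Body of an Anthropic Messages API request as the SDK would
# serialize it. The model id below is an assumed example; real
# requests require an API key sent via the x-api-key header.
message_request = {
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello, Claude"}],
}

serialized = json.dumps(message_request)
```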
gpt-oss-120b and gpt-oss-20b are two open-weight language models
Package and deploy machine learning models using Docker containers
Full-stack AI Red Teaming platform
Large Language Model Text Generation Inference
Speech-to-text voice recognition tool
A TTS that fits in your CPU (and pocket)
Open platform connecting AI agents to tools via unified MCP server
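MCP servers such as the ones listed here speak JSON-RPC 2.0, and a tool invocation is a `tools/call` request naming the tool and its arguments. A minimal sketch of that envelope (the tool name `search_web` and its arguments are hypothetical):

```python
import json

# JSON-RPC 2.0 envelope for an MCP tool invocation. The method
# name "tools/call" comes from the MCP specification; the tool
# "search_web" and its arguments are hypothetical examples.
rpc_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_web",
        "arguments": {"query": "unified MCP servers"},
    },
}

wire_message = json.dumps(rpc_request)
```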
Speech-AI-Forge is a project developed around TTS generation models