CLI proxy that reduces LLM token consumption
The AI-native (edge and LLM) proxy for agents
Prevents outdated Rust code suggestions from AI assistants
Delivery infrastructure for agentic apps
Distributed LLM and Stable Diffusion inference
High-performance, multiplayer code editor from the creators of Atom
Built for demanding AI workflows
Next-generation agentic proxy for AI agents and MCP servers
Python-free Rust inference server
Fast, flexible LLM inference
High-performance inference server for text embedding models
Command-line tool for Drive, Gmail, Calendar, Sheets, Docs, Chat, etc.
Fast ML inference & training for ONNX models in Rust
Rust async runtime based on io-uring
Toolkit for adding a human-in-the-loop approval layer
Convert codebases into structured prompts optimized for LLM analysis
Fast and efficient unstructured data extraction