Delivery infrastructure for agentic apps
High-performance Twitch bot in Rust
The best way to use and work with blocks
AI gateway with token compression for Claude Code, Codex, and more
The Rust workspace under rust/ is the current systems-language port
Rust async runtime based on io-uring
Convert codebases into structured prompts optimized for LLM analysis
Instant, controllable, local pre-trained AI models in Rust
Python-free Rust inference server
A CLI tool for tracking token usage from OpenCode and Claude Code
High-performance API combining reasoning and creative AI models
Shinkai lets you effortlessly create advanced local AI agents
A reactive runtime for building durable AI agents
An e-book about real-world applications of LLMs
High-performance runtime for data analytics applications