High performance Twitch bot in Rust
Delivery infrastructure for agentic apps
Serialize repositories into LLM-ready context with smart prioritization
Rust async runtime based on io_uring
The Rust workspace under rust/ is the current systems-language port
Instant, controllable, local pre-trained AI models in Rust
High-performance API combining reasoning and creative AI models
Python-free Rust inference server
Shinkai allows you to create advanced local AI agents effortlessly
An e-book about real-world applications of LLMs
A reactive runtime for building durable AI agents
High-performance runtime for data analytics applications