Supercharge Your LLM Application Evaluations
Debug, evaluate, and monitor your LLM apps, RAG systems, and agentic AI
Evaluate and monitor ML models from validation to production
See where your AI coding tokens go
Visual tool for building, testing, and deploying AI agent workflows
TONL (Token-Optimized Notation Language)
The React for Voice and Chat: build apps for Alexa and Google Assistant
Outcome-driven agent development framework that evolves
Python SDK for agent monitoring, LLM cost tracking, benchmarking, etc.
Open source platform for managing, testing, and deploying AI apps
Personal AI notebooks: organize files and webpages and generate notes
Host Agent for AWS CodeDeploy
Next-generation AI Agent Optimization Platform
Open source LLM-Observability Platform for Developers
Open-source, developer-first LLMOps platform
Run a full local LLM stack with one command using Docker
Run Coding Agents in Sandboxes
Web app for interacting with any LangGraph agent (Python & TS) via a chat interface
A minimal yet professional single-agent demo project
Open source codebase for Scale Agentex
Spatiotemporal Signal Processing with Neural Machine Learning Models
Best practices on recommendation systems
Local AI file organization with categorization and rename suggestions
Bench is a tool for evaluating LLMs for production use cases
Visual Automation IDE — automate anything you see on screen