Athina AI
Athina is a collaborative AI development platform that enables teams to build, test, and monitor AI applications efficiently. It offers prompt management, evaluation tools, dataset handling, and observability, all designed to streamline the development of reliable AI systems. Athina integrates with a wide range of models and services, including custom models, and protects data through fine-grained access controls and self-hosted deployment options. The platform is SOC 2 Type 2 compliant, providing a secure environment for AI development, and its user-friendly interface allows both technical and non-technical team members to collaborate effectively, accelerating the deployment of AI features.
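To make the evaluation workflow such platforms automate concrete, here is a minimal sketch of a programmatic eval loop in plain Python. This is an illustration of the pattern, not Athina's SDK: `call_llm` is a stand-in for whatever model client you use, and `keyword_eval` is a deliberately simple example evaluator.

```python
# A minimal, hypothetical eval loop; not Athina's API.
from dataclasses import dataclass

@dataclass
class EvalCase:
    query: str
    expected_keywords: list  # keywords the response should contain

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (OpenAI, Anthropic, a custom model, ...)."""
    return "Paris is the capital of France."

def keyword_eval(response: str, keywords: list) -> bool:
    """A simple programmatic evaluator: pass if every expected keyword appears."""
    return all(k.lower() in response.lower() for k in keywords)

cases = [
    EvalCase("What is the capital of France?", ["Paris"]),
    EvalCase("Name the French capital.", ["Paris"]),
]
results = [keyword_eval(call_llm(c.query), c.expected_keywords) for c in cases]
print(f"pass rate: {sum(results) / len(results):.0%}")
```

A platform like Athina replaces the ad-hoc pieces here, such as hand-rolled evaluators, local datasets, and a print statement, with managed, shareable equivalents.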
Orq.ai
Orq.ai is the #1 platform for software teams to operate agentic AI systems at scale. Optimize prompts, deploy use cases, and monitor performance with no blind spots and no vibe checks. Experiment with prompts and LLM configurations before moving to production. Evaluate agentic AI systems in offline environments. Roll out GenAI features to specific user groups with guardrails, data privacy safeguards, and advanced RAG pipelines. Visualize all events triggered by agents for fast debugging. Get granular control over cost, latency, and performance. Connect to your favorite AI models, or bring your own. Speed up your workflow with out-of-the-box components built for agentic AI systems. Manage the core stages of the LLM app lifecycle in one central platform. Deploy self-hosted or hybrid, with SOC 2 and GDPR compliance for enterprise security.
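As a rough illustration of the "guardrails plus gradual rollout" idea, here is a hedged sketch of that pattern in plain Python. The names (`is_in_rollout_group`, `guardrail`, `serve`) are illustrative, not Orq.ai's API, and real guardrails and targeting rules are far richer.

```python
# Hypothetical sketch of a gated rollout with a guardrail check.
import hashlib

def is_in_rollout_group(user_id: str, percent: int) -> bool:
    """Deterministically bucket users so a feature reaches only `percent`% of them."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

def guardrail(output: str) -> bool:
    """Block outputs containing terms your policy forbids; a toy check only."""
    banned = ("ssn", "credit card")
    return not any(term in output.lower() for term in banned)

def serve(user_id: str, output: str, fallback: str) -> str:
    """Return the GenAI output only to users in the rollout group, and only if it passes the guardrail."""
    if is_in_rollout_group(user_id, percent=10) and guardrail(output):
        return output
    return fallback

print(serve("user-42", "Here is your summary.", "Feature not available yet."))
```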
Maxim
Maxim is an agent simulation, evaluation, and observability platform that empowers modern AI teams to deploy agents with quality, reliability, and speed.
Maxim's end-to-end evaluation and data management stack covers every stage of the AI lifecycle, from prompt engineering to pre- and post-release testing, observability, dataset creation and management, and fine-tuning.
Use Maxim to simulate and test your multi-turn workflows across a wide variety of scenarios and different user personas before taking your application to production (a minimal sketch of such a simulation loop follows the lists below).
Features:
Agent Simulation
Agent Evaluation
Prompt Playground
Logging/Tracing Workflows
Custom Evaluators: AI, Programmatic, and Statistical
Dataset Curation
Human-in-the-loop
Use Cases:
Simulate and test AI agents
Evals for agentic workflows: pre- and post-release
Tracing and debugging multi-agent workflows
Real-time alerts on performance and quality
Creating robust datasets for evals and fine-tuning
Human-in-the-loop workflows
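Below is a minimal sketch of what persona-driven, multi-turn simulation looks like in plain Python, assuming stand-in `agent` and `simulated_user` functions in place of real model calls. It illustrates the pattern, not Maxim's SDK.

```python
# Hypothetical multi-turn simulation across user personas.
PERSONAS = ["impatient power user", "first-time user who gives vague answers"]

def agent(history: list) -> str:
    """Stand-in for the agent under test."""
    return f"agent reply #{len(history) // 2 + 1}"

def simulated_user(persona: str, history: list) -> str:
    """Stand-in for an LLM role-playing the persona to drive the conversation."""
    return f"({persona}) follow-up #{len(history) // 2 + 1}"

def simulate(persona: str, opening: str, max_turns: int = 3) -> list:
    """Alternate agent and simulated-user turns, returning the full transcript."""
    history = [opening]
    for _ in range(max_turns):
        history.append(agent(history))
        history.append(simulated_user(persona, history))
    return history

for persona in PERSONAS:
    transcript = simulate(persona, "I need to reset my password.")
    print(persona, "->", len(transcript), "messages")  # feed transcripts to evaluators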
Vellum AI
Bring LLM-powered features to production with tools for prompt engineering, semantic search, version control, quantitative testing, and performance monitoring, compatible with all major LLM providers. Develop an MVP by experimenting with different prompts, parameters, and even LLM providers to quickly arrive at the best configuration for your use case. Vellum acts as a low-latency, highly reliable proxy to LLM providers, allowing you to make version-controlled changes to your prompts with no code changes needed. Vellum collects model inputs, outputs, and user feedback, building up valuable testing datasets that can be used to validate future changes before they go live. Dynamically include company-specific context in your prompts without managing your own semantic search infrastructure.
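As a rough sketch of the proxy pattern described above, the following plain-Python example keeps prompts in a versioned registry, routes calls through a single `proxy` function, and logs every input/output pair so the records can later seed test datasets. All names here are illustrative, not Vellum's API.

```python
# Hypothetical version-controlled prompt proxy with call logging.
PROMPT_REGISTRY = {
    ("summarize", "v2"): "Summarize the following text in one sentence:\n{text}",
}
CALL_LOG: list = []

def call_llm(prompt: str) -> str:
    """Placeholder for the real provider call the proxy forwards to."""
    return "A one-sentence summary."

def proxy(prompt_name: str, version: str, **variables) -> str:
    """Resolve a prompt by name and version, call the model, and log the exchange."""
    template = PROMPT_REGISTRY[(prompt_name, version)]  # bumping v2 -> v3 needs no app code change
    prompt = template.format(**variables)
    output = call_llm(prompt)
    CALL_LOG.append({"prompt": prompt, "output": output, "feedback": None})
    return output

print(proxy("summarize", "v2", text="LLM ops platforms centralize prompt management."))
```

The point of the pattern is that the application asks for a prompt by name, so prompt edits and version bumps happen in the registry rather than in deployed code.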