Athina AI
Athina is a collaborative AI development platform that enables teams to build, test, and monitor AI applications efficiently. It offers features such as prompt management, evaluation tools, dataset handling, and observability, all designed to streamline the development of reliable AI systems. Athina supports integration with various models and services, including custom models, and ensures data privacy through fine-grained access controls and self-hosted deployment options. The platform is SOC-2 Type 2 compliant, providing a secure environment for AI development. Athina's user-friendly interface allows both technical and non-technical team members to collaborate effectively, accelerating the deployment of AI features.
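As a rough sketch of the evaluation workflow described above, the snippet below runs one of the pre-built evals over a small dataset with the open-source athina-evals Python SDK; the loader, eval, and key-setup names follow the SDK's published examples but may differ between versions, so treat it as illustrative rather than canonical usage.

```python
import os

# A minimal sketch assuming the open-source `athina-evals` package
# (pip install athina-evals); class and method names follow the SDK's
# published examples and may differ between versions.
from athina.keys import AthinaApiKey, OpenAiApiKey
from athina.loaders import RagLoader
from athina.evals import DoesResponseAnswerQuery

OpenAiApiKey.set_key(os.environ["OPENAI_API_KEY"])
AthinaApiKey.set_key(os.environ["ATHINA_API_KEY"])

# A tiny RAG-style dataset: user query, retrieved context, and the model's response.
raw_data = [
    {
        "query": "What is the capital of France?",
        "context": ["France's capital and largest city is Paris."],
        "response": "The capital of France is Paris.",
    },
]

dataset = RagLoader().load_dict(raw_data)

# Run a pre-built eval over the whole dataset and inspect the results as a DataFrame.
results = DoesResponseAnswerQuery().run_batch(data=dataset)
print(results.to_df())
```

With an Athina API key configured, eval runs of this kind are typically also visible in the platform's dashboard, which is how non-technical teammates review results.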
Selene 1
Atla's Selene 1 API offers state-of-the-art AI evaluation models, enabling developers to define custom evaluation criteria and obtain precise judgments on their AI applications' performance. Selene outperforms frontier models on commonly used evaluation benchmarks, ensuring accurate and reliable assessments. Users can customize evaluations to their specific use cases through the Alignment Platform, which allows fine-grained analysis and tailored scoring formats. The API returns actionable critiques alongside accurate evaluation scores and integrates smoothly into existing workflows. Pre-built metrics such as relevance, correctness, helpfulness, faithfulness, logical coherence, and conciseness cover common evaluation scenarios, such as detecting hallucinations in retrieval-augmented generation applications or comparing outputs to ground-truth data.
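The sketch below shows roughly how a single evaluation call with a custom criterion might look from Atla's Python client; the client name, the evaluation.create method, and the response fields are assumptions based on public examples rather than a verified SDK reference, so check the current documentation before relying on them.

```python
import os

# A minimal sketch assuming Atla's Python client (pip install atla); the client,
# method, and response field names below are assumptions for illustration and
# may not match the current SDK exactly.
from atla import Atla

client = Atla(api_key=os.environ["ATLA_API_KEY"])

# Ask Selene to judge one input/output pair against a custom criterion.
evaluation = client.evaluation.create(
    model_id="atla-selene",
    model_input="Summarize our refund policy for the customer.",
    model_output="You can return items within 30 days for a full refund.",
    evaluation_criteria=(
        "Score 1-5 for faithfulness: does the answer stick to the source policy "
        "without inventing terms or conditions?"
    ),
)

# Selene returns an evaluation score plus an actionable critique.
result = evaluation.result.evaluation
print(result.score, result.critique)
```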
Prompt flow
Prompt Flow is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, and evaluation through production deployment and monitoring. It makes prompt engineering much easier and lets you build LLM apps of production quality. With Prompt Flow, you create flows that link LLMs, prompts, Python code, and other tools together in an executable workflow, and you can debug and iterate on those flows, with tracing of LLM interactions built in. You can evaluate flows against larger datasets to calculate quality and performance metrics, and integrate that testing and evaluation into your CI/CD system to ensure quality. Flows can be deployed to the serving platform of your choice or integrated into your app's code base, and the cloud version of Prompt Flow in Azure AI makes it easy to collaborate with your team.
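To make the "Python code as a flow node" idea concrete, here is a minimal custom tool sketch assuming the promptflow Python package; note that the decorator's import path has moved between releases (promptflow vs. promptflow.core), so adjust for the version you have installed.

```python
# A sketch of a custom Python tool node, assuming the `promptflow` package
# (pip install promptflow). Recent releases import the decorator from
# promptflow.core; older ones expose it as `from promptflow import tool`.
from promptflow.core import tool


@tool
def extract_final_answer(llm_output: str) -> str:
    """Post-process an upstream LLM node's completion into the flow's answer.

    In flow.dag.yaml this function becomes a Python node, with `llm_output`
    wired to the LLM node's output (e.g. ${chat_node.output}).
    """
    # Keep only the text after an "Answer:" marker, if the prompt asked for one.
    marker = "Answer:"
    if marker in llm_output:
        return llm_output.split(marker, 1)[1].strip()
    return llm_output.strip()
```

Locally, `pf flow test --flow <flow-dir>` executes the whole DAG with sample inputs, and `pf run create --flow <flow-dir> --data data.jsonl` runs the same flow over a dataset for batch testing and evaluation.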
doteval
doteval is an AI-assisted evaluation workspace that brings the creation of high-signal evaluations, the alignment of LLM judges, and the definition of rewards for reinforcement learning into a single platform. It offers a Cursor-like experience for editing evaluations-as-code against a YAML schema, letting users version evaluations across checkpoints, replace manual effort with AI-generated diffs, and compare evaluation runs in tight execution loops to align them with proprietary data. doteval supports fine-grained rubrics and aligned graders, enabling rapid iteration and high-quality evaluation datasets. Users can decide with confidence whether a model upgrade or prompt change is an improvement, and can export their evaluation specifications for reinforcement-learning training. It is designed to speed up evaluation and reward creation by 10 to 100 times, making it a valuable tool for frontier AI teams benchmarking complex model tasks.
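The "aligned graders" step is easiest to picture with a small, tool-agnostic example: the sketch below (plain Python, not doteval's actual YAML schema or API; all names are hypothetical) scores an LLM judge's rubric verdicts against trusted human labels, which is the kind of agreement loop doteval automates and versions.

```python
# Tool-agnostic illustration of grader alignment: compare an LLM judge's verdicts
# on a rubric item against trusted human labels and report agreement. This is NOT
# doteval's schema or API; all names here are hypothetical.
from dataclasses import dataclass


@dataclass
class GradedExample:
    output: str        # model output being judged
    human_label: bool  # ground-truth pass/fail from a domain expert
    judge_label: bool  # pass/fail produced by the LLM judge for the rubric item


def judge_alignment(examples: list[GradedExample]) -> float:
    """Fraction of examples where the LLM judge agrees with the human label."""
    if not examples:
        return 0.0
    return sum(ex.human_label == ex.judge_label for ex in examples) / len(examples)


examples = [
    GradedExample("Refunds are issued within 30 days.", human_label=True, judge_label=True),
    GradedExample("Refunds take 90 days.", human_label=False, judge_label=True),
]

# Low agreement means the rubric wording or judge prompt needs another iteration
# before the grader is trusted for benchmarking or exported as an RL reward.
print(f"judge/human agreement: {judge_alignment(examples):.0%}")
```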