DeepEval
DeepEval is a simple-to-use, open-source framework for evaluating and unit testing large language model (LLM) systems. It is similar to Pytest but specialized for LLM outputs. DeepEval incorporates the latest research to evaluate outputs against metrics such as G-Eval, hallucination, answer relevancy, and RAGAS, which use LLMs and other NLP models that run locally on your machine. Whether your application is built with RAG or fine-tuning, LangChain or LlamaIndex, DeepEval has you covered. With it, you can determine the optimal hyperparameters for your RAG pipeline, prevent prompt drift, or transition from OpenAI to self-hosting Llama 2 with confidence. The framework supports synthetic dataset generation with advanced evolution techniques and integrates with popular frameworks, enabling efficient benchmarking and optimization of LLM systems.
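As a rough illustration of the Pytest-like workflow, the sketch below follows the pattern shown in DeepEval's documented quickstart; exact class names, parameters, and defaults may differ across versions, and the example input/output strings and threshold are placeholders.

```python
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

def test_refund_answer_relevancy():
    # Pair a user input with the output your LLM application produced
    test_case = LLMTestCase(
        input="What if these shoes don't fit?",
        actual_output="You have 30 days to request a full refund at no extra cost.",
    )
    # The metric scores relevancy with an LLM judge; the test fails
    # if the score falls below the threshold
    metric = AnswerRelevancyMetric(threshold=0.7)
    assert_test(test_case, [metric])
```

A test file written this way would typically be executed with DeepEval's test runner (for example, `deepeval test run test_example.py`), which wraps Pytest and reports per-metric scores.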
Learn more
Arena.ai
Arena is a community-powered platform designed to evaluate AI models based on real-world usage and feedback. Created by researchers from UC Berkeley, it enables users to test and compare frontier AI models across various tasks. The platform gathers insights from millions of builders, researchers, and creative professionals to generate transparent performance rankings. Arena’s public leaderboard reflects how models perform in practical scenarios rather than controlled benchmarks. Users can compare models side by side and provide feedback that helps shape future AI development. It supports a wide range of use cases, including text generation, coding, image creation, and video production. By leveraging collective input, Arena advances the understanding and improvement of AI technologies.
Learn more
doteval
doteval is an AI-assisted evaluation workspace that brings the creation of high-signal evaluations, the alignment of LLM judges, and the definition of rewards for reinforcement learning into a single platform. It offers a Cursor-like experience for editing evaluations-as-code against a YAML schema, letting users version evaluations across checkpoints, replace manual effort with AI-generated diffs, and compare evaluation runs in tight execution loops to align them with proprietary data. doteval supports fine-grained rubrics and aligned graders, enabling rapid iteration and high-quality evaluation datasets. Users can confidently judge whether a model upgrade or prompt change is an improvement, and export specifications for reinforcement learning training. It is designed to accelerate evaluation and reward creation by 10 to 100 times, making it a valuable tool for frontier AI teams benchmarking complex model tasks.
Learn more
LLM Scout
LLM Scout is an evaluation and analysis platform designed to help users benchmark, compare, and interpret the performance of large language models across diverse tasks, datasets, and real-world prompts within a unified environment. It enables side-by-side comparisons of models by measuring accuracy, reasoning, factuality, bias, safety, and other key metrics using customizable evaluation suites, curated benchmarks, and domain-specific tests. It supports the ingestion of user-provided data and queries so teams can assess how different models respond to their own real-world workflows or industry-specific needs, and visualize outputs in an intuitive dashboard that highlights performance trends, strengths, and weaknesses. LLM Scout also includes tools for analyzing token usage, latency, cost implications, and model behavior under varied conditions, helping stakeholders make informed decisions about which models best fit specific applications or quality requirements.
Learn more