Bench is a tool for evaluating LLMs for production use cases. Whether you are comparing different LLMs, experimenting with different prompts, or tuning generation hyperparameters such as temperature and the number of tokens, Bench provides a single touch point for all of your LLM performance evaluation.

Features

  • Standardize the workflow of LLM evaluation with a common interface across tasks and use cases
  • Test whether open-source LLMs can perform as well as the top closed-source LLM API providers on your specific data
  • Translate rankings on LLM leaderboards and benchmarks into scores that matter for your actual use case

Bench can be installed into your Python environment either with optional dependencies for serving results locally, or with minimum dependencies.
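The two installation options above can be run with pip; a minimal sketch, assuming the package is published on PyPI as `arthur-bench` with a `server` extra for serving results locally (both names are assumptions, not confirmed by this page):

```shell
# Option 1: install with optional dependencies for serving results locally
# (assumes the package name `arthur-bench` and a `server` extra)
pip install 'arthur-bench[server]'

# Option 2: install with minimum dependencies only
pip install arthur-bench
```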


License

MIT License


Additional Project Details

Programming Language

TypeScript

Related Categories

TypeScript Artificial Intelligence Software

Registered

2023-08-21