AI Configuration

Viktor Fási

Lastest supports multiple AI providers for test generation, fixing, and diff analysis.


AI Providers

Provider           Description                                          Cost
Claude CLI         Uses the claude CLI tool installed on your machine   Per your Anthropic plan
Anthropic Direct   Direct API calls to Anthropic                        Per API usage
OpenRouter         Access multiple models via OpenRouter                Per API usage
Claude Agent SDK   Uses Anthropic's Agent SDK                           Per API usage
OpenAI             Direct API calls to OpenAI (GPT-4o, etc.)            Per API usage
Ollama             Run local models with zero API cost                  Free (local compute)

Setting Up AI

  1. Go to Settings > AI
  2. Select your test generation provider
  3. Enter the required API key or configuration
  4. Optionally select a separate diff analysis provider
  5. Settings save automatically 500 ms after your last change
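The auto-save in step 5 is a debounce: the save fires only once input has been idle for 500 ms, so rapid typing does not trigger a write per keystroke. A minimal sketch of that pattern (debounce and the 500 ms wiring are illustrative, not Lastest's actual code):

```typescript
// Debounce: collapse a burst of calls into one call after `ms` of quiet.
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  ms: number,
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer); // restart the quiet period
    timer = setTimeout(() => fn(...args), ms);
  };
}

// Hypothetical usage: saveSettings runs 500 ms after the last edit.
const saveSettings = debounce((settings: unknown) => {
  console.log("saving", settings);
}, 500);
```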

Claude CLI

  • Install the claude CLI: follow Anthropic's instructions
  • No API key needed in Lastest -- it uses your CLI authentication

Anthropic Direct

  • Enter your Anthropic API key
  • Select the model (defaults to latest Claude)

OpenRouter

  • Enter your OpenRouter API key
  • Select from available models

OpenAI

  • Enter your OpenAI API key
  • Select from available models (GPT-4o, etc.)

Ollama

  • Install Ollama on your machine
  • Pull a model (e.g., ollama pull llama3)
  • Lastest connects to the local Ollama server automatically
  • Zero API cost -- runs entirely on your hardware
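Under the hood, a local Ollama server exposes a REST endpoint at http://localhost:11434/api/generate that accepts a model name and prompt. A sketch of what a request to it might look like (buildOllamaRequest is a hypothetical helper, not Lastest's actual code; the payload fields follow Ollama's /api/generate API):

```typescript
// Default address of a locally running Ollama server.
const OLLAMA_URL = "http://localhost:11434/api/generate";

interface OllamaRequest {
  model: string;   // e.g. "llama3", fetched earlier via `ollama pull llama3`
  prompt: string;
  stream: boolean; // false -> one JSON response instead of streamed chunks
}

function buildOllamaRequest(model: string, prompt: string): OllamaRequest {
  return { model, prompt, stream: false };
}

// Usage (needs a running Ollama server, so not executed here):
// const res = await fetch(OLLAMA_URL, {
//   method: "POST",
//   body: JSON.stringify(buildOllamaRequest("llama3", "Generate a login test")),
// });
```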

Separate Diff Provider

You can use a different AI provider for diff analysis than for test generation. This is useful when:

  • You want a cheaper/faster model for diff classification
  • You want to use a local model (Ollama) for diffs but cloud AI for generation
  • You want to keep diff analysis costs separate
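The fallback rule implied above can be sketched as: diff analysis uses its own provider when one is configured, otherwise it falls back to the test-generation provider. The names here (AiSettings, resolveProvider) are illustrative, not Lastest's API:

```typescript
type Provider =
  | "claude-cli" | "anthropic" | "openrouter"
  | "agent-sdk" | "openai" | "ollama";

interface AiSettings {
  generationProvider: Provider;
  diffProvider?: Provider; // optional separate provider for diff analysis
}

function resolveProvider(
  settings: AiSettings,
  task: "generation" | "diff",
): Provider {
  // Diff analysis prefers its dedicated provider; everything else
  // (and diffs with no dedicated provider) uses the generation provider.
  if (task === "diff" && settings.diffProvider) return settings.diffProvider;
  return settings.generationProvider;
}
```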

Custom Instructions

Add custom instructions that are included in every AI prompt:

  • Guide the AI about your app's structure
  • Specify preferred selectors or patterns
  • Add context about your testing requirements
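Conceptually, "included in every AI prompt" means the instructions are prepended to whatever task prompt is being sent. A minimal sketch, assuming a simple concatenation layout (buildPrompt and the "Project guidance" label are hypothetical; Lastest's real prompt format is not documented here):

```typescript
function buildPrompt(customInstructions: string, task: string): string {
  const parts: string[] = [];
  if (customInstructions.trim().length > 0) {
    // Custom instructions ride along with every request.
    parts.push(`Project guidance:\n${customInstructions.trim()}`);
  }
  parts.push(task);
  return parts.join("\n\n");
}
```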

AI Prompt Logs

Full audit trail of all AI requests and responses:

  • View the last 50 AI interactions
  • See exactly what was sent and received
  • Debug AI-generated tests
  • Available in Settings > AI Logs
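A "last 50 interactions" log behaves like a bounded ring buffer: new entries push out the oldest once the limit is reached. A sketch of that structure (PromptLog is an illustrative stand-in for whatever Lastest stores internally):

```typescript
interface LogEntry {
  prompt: string;
  response: string;
  at: number; // epoch millis
}

class PromptLog {
  private entries: LogEntry[] = [];
  constructor(private readonly limit = 50) {}

  record(prompt: string, response: string): void {
    this.entries.push({ prompt, response, at: Date.now() });
    if (this.entries.length > this.limit) this.entries.shift(); // drop oldest
  }

  recent(): readonly LogEntry[] {
    return this.entries;
  }
}
```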

When AI is Used

Operation                    AI Required?
Recording tests              No
Running tests                No
Diffing screenshots          No
Generating tests from URL    Yes
Fixing broken tests          Yes
Enhancing tests              Yes
Play Agent autonomy          Yes
Spec-driven generation       Yes
AI diff analysis             Yes (optional)
Route discovery              Yes
AI failure triage            Yes (optional)
Codebase intelligence        Yes

Running tests never requires AI -- it's pure Playwright execution.


Codebase Intelligence

When AI generates tests, Lastest can automatically detect your project context to enrich prompts:

  • Framework (Next.js, React, Vue, etc.)
  • CSS framework (Tailwind, CSS Modules, styled-components)
  • Auth mechanism (NextAuth, Clerk, Firebase, etc.)
  • State management (Redux, Zustand, etc.)
  • API layer (REST, GraphQL, tRPC)
  • Key dependencies with testing implications (100+ package database)

This runs automatically during Play Agent and AI-assisted test generation.
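Detection of this kind usually boils down to inspecting package.json dependencies and mapping known package names to project traits. A minimal sketch under that assumption (the SIGNALS map and detectContext are illustrative; Lastest's real database covers 100+ packages):

```typescript
interface ProjectContext {
  framework?: string;
  css?: string;
  auth?: string;
}

// Known dependency names -> what their presence implies about the project.
const SIGNALS: Record<string, Partial<ProjectContext>> = {
  next: { framework: "Next.js" },
  vue: { framework: "Vue" },
  tailwindcss: { css: "Tailwind" },
  "next-auth": { auth: "NextAuth" },
  "@clerk/nextjs": { auth: "Clerk" },
};

function detectContext(deps: Record<string, string>): ProjectContext {
  const ctx: ProjectContext = {};
  for (const name of Object.keys(deps)) {
    Object.assign(ctx, SIGNALS[name] ?? {}); // unknown deps contribute nothing
  }
  return ctx;
}
```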


AI Test Data Variables

The Vars tab on every test now hosts three kinds of data sources:

  • Static -- e.g. email = "qa@example.com"; use for hard-coded test data.
  • AI-generated (with presets) -- e.g. customer_name, address, iban; the AI fills in realistic values per run. Use when a class of value matters more than a specific value.
  • Google Sheets-backed -- a column from a Sheet pinned to the test; use for shared, collaboratively edited data.

AI-generated vars use deterministic seeding when timestamp / random freezing is enabled, so they remain stable for diffing. Sheets vars and connected sources are surfaced inline on the test page so reviewers can see what data drove a given run.
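Deterministic seeding means the random source is derived from stable inputs (for example, a test id plus the variable name) rather than from wall-clock time, so a variable like customer_name resolves to the same value on every run. A sketch of the idea, assuming a hash-seeded PRNG (mulberry32, hash, and pickStable are illustrative, not Lastest's implementation):

```typescript
// Small seeded PRNG (mulberry32): same seed -> same sequence of values.
function mulberry32(seed: number): () => number {
  return () => {
    seed |= 0; seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// FNV-1a string hash to turn stable identifiers into a numeric seed.
function hash(s: string): number {
  let h = 2166136261;
  for (let i = 0; i < s.length; i++) h = Math.imul(h ^ s.charCodeAt(i), 16777619);
  return h >>> 0;
}

// Pick a candidate value deterministically for (test, variable).
function pickStable(testId: string, varName: string, candidates: string[]): string {
  const rand = mulberry32(hash(`${testId}:${varName}`));
  return candidates[Math.floor(rand() * candidates.length)];
}
```

Because the seed depends only on the test and variable names, reruns produce identical values and screenshot diffs stay clean.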

See [Google Sheets Integration] for the Sheets side of this.


Related

Wiki: Agent Monitoring
Wiki: Google Sheets Integration
Wiki: Home
Wiki: Settings Reference