Lastest supports multiple AI providers for test generation, fixing, and diff analysis.
| Provider | Description | Cost |
|---|---|---|
| Claude CLI | Uses the `claude` CLI tool installed on your machine | Per your Anthropic plan |
| Anthropic Direct | Direct API calls to Anthropic | Per API usage |
| OpenRouter | Access multiple models via OpenRouter | Per API usage |
| Claude Agent SDK | Uses Anthropic's Agent SDK | Per API usage |
| OpenAI | Direct API calls to OpenAI (GPT-4o, etc.) | Per API usage |
| Ollama | Run local models with zero API cost | Free (local compute) |
Setup:

- Claude CLI: follow Anthropic's installation instructions for the `claude` CLI
- Ollama: pull a local model first (e.g. `ollama pull llama3`)

You can use a different AI provider for diff analysis than for test generation. This is useful when you want to balance cost against quality -- say, a free local Ollama model for routine diff analysis alongside a stronger hosted model for generation.
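As a rough sketch of how such a split could be wired up (the `AIProvider` interface and the wiring are assumptions for illustration, not Lastest's actual internals; the Ollama call uses its real local HTTP API):

```typescript
// Sketch of a pluggable provider interface -- the interface shape is an
// assumption for illustration, not Lastest's actual internals.
interface AIProvider {
  complete(prompt: string): Promise<string>;
}

// Ollama serves a local HTTP API on port 11434; POST /api/generate with
// stream: false returns one JSON object whose `response` field holds
// the model output.
function ollamaProvider(model: string): AIProvider {
  return {
    async complete(prompt) {
      const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model, prompt, stream: false }),
      });
      const data = (await res.json()) as { response: string };
      return data.response;
    },
  };
}

// A free local model handles high-volume diff analysis; test generation
// could point at a different (hosted) provider behind the same interface.
const diffAnalyzer = ollamaProvider("llama3");
```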
Add custom instructions that are included in every AI prompt.
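Mechanically this usually means prepending the instructions to each task prompt. A minimal sketch, assuming a `customInstructions` config field (the field name and prompt assembly are illustrative, not Lastest's documented config):

```typescript
// Sketch: prepend user-supplied instructions to every outgoing prompt.
// The config shape is an assumption for illustration.
interface PromptConfig {
  customInstructions?: string;
}

function buildPrompt(config: PromptConfig, task: string): string {
  // Custom instructions, when present, lead every prompt so the model
  // sees project-wide conventions before the task-specific request.
  return config.customInstructions
    ? `${config.customInstructions}\n\n${task}`
    : task;
}

// Example: enforce a house style across all generated tests.
const prompt = buildPrompt(
  { customInstructions: "Always use data-testid selectors. Never use XPath." },
  "Generate a login test for /signin."
);
```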
Lastest keeps a full audit trail of all AI requests and responses.
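One common implementation of such a trail -- shown here as a sketch, not Lastest's actual log format -- is an append-only JSONL file with one record per request/response pair:

```typescript
import { appendFileSync } from "node:fs";

// Sketch: append one JSON line per AI call. The file path and record
// fields are assumptions for illustration.
function logAICall(provider: string, prompt: string, response: string): void {
  const record = {
    timestamp: new Date().toISOString(),
    provider,
    prompt,
    response,
  };
  // JSONL: one self-contained JSON object per line, cheap to append
  // and easy to grep or replay later.
  appendFileSync("ai-audit.jsonl", JSON.stringify(record) + "\n");
}
```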
| Operation | AI Required? |
|---|---|
| Recording tests | No |
| Running tests | No |
| Diffing screenshots | No |
| Generating tests from URL | Yes |
| Fixing broken tests | Yes |
| Enhancing tests | Yes |
| Play Agent autonomy | Yes |
| Spec-driven generation | Yes |
| AI diff analysis | Yes (optional) |
| Route discovery | Yes |
| AI failure triage | Yes (optional) |
| Codebase intelligence | Yes |
Running tests never requires AI -- it's pure Playwright execution.
When AI generates tests, Lastest can automatically detect your project context to enrich prompts.
This runs automatically during Play Agent and AI-assisted test generation.
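Context detection of this kind typically boils down to reading project manifests. As an illustrative sketch (the heuristics below are assumptions, not Lastest's actual detection logic), the UI stack can be inferred from `package.json` dependencies:

```typescript
import { readFileSync } from "node:fs";

// Sketch: infer project context from package.json to enrich AI prompts.
// The detection heuristics here are illustrative assumptions.
function detectProjectContext(root: string): string[] {
  const pkg = JSON.parse(readFileSync(`${root}/package.json`, "utf8"));
  const deps: Record<string, string> = {
    ...pkg.dependencies,
    ...pkg.devDependencies,
  };
  const context: string[] = [];
  if (deps["react"]) context.push("React app");
  if (deps["next"]) context.push("Next.js routing");
  if (deps["typescript"]) context.push("TypeScript project");
  return context;
}
```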
The Vars tab on every test now hosts three kinds of data sources:
| Type | Example | When to use |
|---|---|---|
| Static | `email = "qa@example.com"` | Hard-coded test data |
| AI-generated (with presets) | `customer_name`, `address`, `iban` -- AI fills realistic values per run | Avoid hard-coding when a class of value matters more than a specific value |
| Google Sheets-backed | A column from a Sheet pinned to the test | Shared, collaboratively edited data |
AI-generated vars use deterministic seeding when timestamp / random freezing is enabled, so they remain stable for diffing. Sheets vars and connected sources are surfaced inline on the test page so reviewers can see what data drove a given run.
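Deterministic seeding generally means deriving the random seed from something stable, such as the test id, so each run regenerates the same values. A sketch of the idea (mulberry32 is a standard PRNG; seeding from the test id is an assumption, not Lastest's documented behavior):

```typescript
// Sketch: a seeded PRNG so AI-generated var values are reproducible.
// Seeding from the test id keeps values stable across runs, which is
// what makes screenshot diffing meaningful.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function seedFromTestId(testId: string): number {
  // Simple string hash -- same test id, same seed, same generated data.
  let h = 0;
  for (const ch of testId) h = (Math.imul(h, 31) + ch.charCodeAt(0)) | 0;
  return h;
}

const rand = mulberry32(seedFromTestId("checkout-flow#42"));
const customerName = `Customer ${Math.floor(rand() * 1000)}`; // stable per test
```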
See [[Google Sheets Integration]] for the Sheets side of this.