Lastest offers three ways to create tests, from fully manual to fully autonomous.
Open the recorder, click through your app, hit stop. Lastest captures every interaction and generates deterministic Playwright code -- no AI involved, no API keys needed. You own the test code and can edit it by hand.
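The recorder's output is ordinary Playwright test code you can read and edit directly. A minimal sketch of what a generated test might look like (the URL, labels, and test name here are hypothetical, not actual Lastest output):

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical recorder output for a login flow: every step is a
// deterministic replay of what was clicked -- no AI in the loop.
test('login flow (recorded)', async ({ page }) => {
  await page.goto('https://example.com/login');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('s3cret');
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page).toHaveURL(/dashboard/);
});
```

Because the code is plain Playwright, it runs under a stock `npx playwright test` with no Lastest runtime required.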
How to use:
Best for: Teams that don't want AI, air-gapped environments, simple flows.
AI generates, fixes, or enhances tests -- but you review and approve before anything is saved.
Capabilities:
How to use:
The "Enhance with AI" flow now streams progress through the Lastest MCP server: as AI plans new test cases, the real test names appear in the UI before any code is generated, and selectors are validated against the live page via MCP tooling rather than guessed. This drastically reduces the "test renamed in code but still labeled untitled-3 in the UI" drift and surfaces selector failures while AI is still iterating.
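One way to picture the streaming behavior is as a fold over plan events: a "planned" event carries the real test name before any code exists, and a later "generated" event attaches the code to the same row. This is a hedged sketch; the event shapes and field names below are hypothetical, not the actual MCP protocol:

```typescript
// Hypothetical shape of events streamed from the MCP server.
type PlanEvent =
  | { kind: "test-planned"; id: string; name: string }
  | { kind: "code-generated"; id: string; code: string };

interface TestRow { id: string; name: string; code?: string }

// Fold streamed events into the UI's test list so the real name
// appears as soon as the test is planned, before code generation.
function applyEvents(rows: TestRow[], events: PlanEvent[]): TestRow[] {
  const byId = new Map<string, TestRow>(
    rows.map((r): [string, TestRow] => [r.id, { ...r }])
  );
  for (const ev of events) {
    if (ev.kind === "test-planned") {
      byId.set(ev.id, { ...(byId.get(ev.id) ?? { id: ev.id, name: "" }), name: ev.name });
    } else {
      // Without a prior "planned" event the row would fall back to an
      // untitled-* placeholder -- exactly the drift this flow avoids.
      const row = byId.get(ev.id) ?? { id: ev.id, name: `untitled-${ev.id}` };
      byId.set(ev.id, { ...row, code: ev.code });
    }
  }
  return [...byId.values()];
}
```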
Best for: Day-to-day development, iterating on tests, fixing breakages fast.
One click kicks off an 11-step pipeline:
Uses specialized sub-agents: Orchestrator, Planner, Scout, Diver, Generator, and Healer. The agent pauses and asks for help only when it hits something it can't resolve on its own (missing settings, server offline). You can pause/resume, approve plans, and skip steps. Monitor progress in real time via Agent Monitoring.
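The "pause only when blocked" behavior can be sketched as a dispatch loop over the pipeline steps. The agent names come from the list above; the loop itself is a hypothetical illustration, not Lastest's actual orchestrator:

```typescript
type Agent = "Orchestrator" | "Planner" | "Scout" | "Diver" | "Generator" | "Healer";

type StepResult =
  | { status: "ok"; output: string }
  | { status: "blocked"; reason: string }; // e.g. missing settings, server offline

// Hypothetical dispatch loop: run each step's sub-agent, and surface a
// question to the user only when a step reports it is blocked.
function runPipeline(
  steps: { agent: Agent; run: () => StepResult }[],
  askUser: (agent: Agent, reason: string) => void
): string[] {
  const outputs: string[] = [];
  for (const step of steps) {
    const result = step.run();
    if (result.status === "blocked") {
      askUser(step.agent, result.reason); // pause point: needs human help
      break;
    }
    outputs.push(result.output);
  }
  return outputs;
}
```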
How to use:
Best for: Onboarding a new project, generating full coverage from scratch, CI bootstrapping.
Import structured specifications and let AI generate tests from them:
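A structured spec needs only enough shape for AI to work from: a name, plain-language steps, and an expected outcome. The interface and validation below are a hypothetical sketch of what an imported spec might look like, not Lastest's actual schema:

```typescript
// Hypothetical shape for one imported specification case.
interface SpecCase {
  name: string;
  steps: string[];   // plain-language steps for AI to implement
  expected: string;  // plain-language expected outcome
}

// Minimal validation before handing specs to AI generation:
// reject cases with no name or no steps.
function validateSpecs(specs: SpecCase[]): { valid: SpecCase[]; rejected: string[] } {
  const valid: SpecCase[] = [];
  const rejected: string[] = [];
  for (const s of specs) {
    if (s.name.trim() && s.steps.length > 0) valid.push(s);
    else rejected.push(s.name || "(unnamed)");
  }
  return { valid, rejected };
}
```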
How to use:
Organize tests into a nested hierarchy of functional areas:
Group tests into ordered suites for structured execution:
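The two structures above differ in purpose: functional areas nest arbitrarily deep for organization, while suites impose a flat, explicit execution order. A hedged sketch of the shapes involved (field names are hypothetical):

```typescript
// Hypothetical shapes: functional areas nest, suites order tests.
interface FunctionalArea {
  name: string;
  children: FunctionalArea[];
  testIds: string[];
}

interface Suite {
  name: string;
  orderedTestIds: string[]; // executed in exactly this order
}

// Collect every test id under an area, depth-first.
function collectTestIds(area: FunctionalArea): string[] {
  return [...area.testIds, ...area.children.flatMap(collectTestIds)];
}
```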
Every change to a test creates a new version with a change reason:
You can view the full history and restore any previous version.
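One common way to model this kind of history is append-only: restoring an old version appends a new record rather than rewriting the past, so the audit trail survives. This is a hypothetical sketch of that design, not Lastest's actual storage schema:

```typescript
// Hypothetical version record: each save appends, never overwrites.
interface TestVersion {
  version: number;      // assumed sequential, 1..n
  code: string;
  changeReason: string;
}

// Restoring creates a NEW version carrying the old code and a fresh
// change reason, keeping the full history intact.
function restore(history: TestVersion[], version: number, reason: string): TestVersion[] {
  const target = history.find(v => v.version === version);
  if (!target) throw new Error(`no version ${version}`);
  const next = { version: history.length + 1, code: target.code, changeReason: reason };
  return [...history, next];
}
```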
Capture multiple labeled screenshots within a single test for multi-page flow testing. Each screenshot is compared independently against its own baseline.
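In generated Playwright code, per-label baselines could map onto Playwright's built-in `toHaveScreenshot` assertion, which diffs each named screenshot against its own baseline file. Whether Lastest uses exactly this mapping is an assumption; the flow and labels below are hypothetical:

```typescript
import { test, expect } from '@playwright/test';

// Each labeled screenshot is compared against its own baseline file.
// URL, button names, and labels here are hypothetical.
test('multi-page checkout visuals', async ({ page }) => {
  await page.goto('https://example.com/cart');
  await expect(page).toHaveScreenshot('cart.png');

  await page.getByRole('button', { name: 'Checkout' }).click();
  await expect(page).toHaveScreenshot('checkout.png');

  await page.getByRole('button', { name: 'Pay' }).click();
  await expect(page).toHaveScreenshot('confirmation.png');
});
```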
When recording tests, Lastest automatically detects required browser capabilities:
These are automatically enabled in the corresponding Playwright settings.
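For instance, if a recorded flow read the clipboard and used geolocation, the corresponding Playwright settings might end up looking like this config fragment (the specific capability values are hypothetical):

```typescript
// playwright.config.ts (fragment)
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // Capabilities auto-detected during recording:
    permissions: ['clipboard-read', 'geolocation'],
    geolocation: { latitude: 52.52, longitude: 13.405 },
  },
});
```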
Lastest supports two recording engines:
Configure the default engine in the Settings Reference.
After you stop a recording, Lastest replays the captured steps against a fresh page and verifies every selector resolves before saving the test. Selectors that resolved during the original recording but fail on replay (because the page rebuilt the DOM, an element was conditionally rendered, etc.) are surfaced immediately so you can pick a sturdier selector or add an explicit wait. This sanity-check step prevents the classic "the test passed when I recorded it, then failed on the very next run" failure mode.
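Conceptually, the verification pass boils down to asking the fresh page whether each captured selector still resolves. A hedged sketch of that check (not Lastest's actual implementation; `locator().count()` resolves immediately instead of auto-waiting, which is what makes conditionally rendered elements show up as failures):

```typescript
import type { Page } from '@playwright/test';

// Replay sanity check: report every captured selector that no longer
// resolves on a freshly loaded page.
async function verifySelectors(page: Page, selectors: string[]): Promise<string[]> {
  const failures: string[] = [];
  for (const sel of selectors) {
    // count() does not auto-wait, so a rebuilt DOM or a
    // conditionally rendered element surfaces as a failure here.
    if ((await page.locator(sel).count()) === 0) failures.push(sel);
  }
  return failures;
}
```

Any selector returned from this check is a candidate for a sturdier locator or an explicit wait before the test is saved.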
The freeform description columns on tests and functional areas have been replaced with two structured fields:
- test_specs -- the user-facing description of what a test should do, used by AI when re-generating or healing the test
- agent_plan -- the Play Agent's planning output (functional area decomposition, test case rationale)

Existing descriptions were backfilled into test_specs when the v1.12 schema migration ran. The Compose page and AI prompts read from these structured fields directly.