Creating Tests

Lastest offers three ways to create tests, from fully manual to fully autonomous.


1. AI-Free (Manual Recording)

Open the recorder, click through your app, hit stop. Lastest captures every interaction and generates deterministic Playwright code -- no AI involved, no API keys needed. You own the test code and can edit it by hand.

How to use:

  1. Navigate to your repository
  2. Click Record
  3. Enter the URL of your app
  4. Interact with your app -- click, type, scroll, navigate
  5. Click Stop when done
  6. Review the generated Playwright code
  7. Save the test
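
For reference, the output is plain Playwright code. A minimal sketch of what a recorded login flow might look like (the URL, selectors, and test name here are illustrative, not Lastest's actual output):

  import { test, expect } from '@playwright/test';

  test('recorded: log in and land on dashboard', async ({ page }) => {
    await page.goto('https://app.example.com/login');          // step: navigate
    await page.getByLabel('Email').fill('user@example.com');   // step: type
    await page.getByLabel('Password').fill('hunter2');
    await page.getByRole('button', { name: 'Sign in' }).click();
    await expect(page).toHaveURL(/\/dashboard/);               // captured final state
  });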

Best for: Teams that don't want AI, air-gapped environments, simple flows.


2. AI-Assisted (Human-in-the-Loop)

AI generates, fixes, or enhances tests -- but you review and approve before anything is saved.

Capabilities:

  • Feed it a URL and get a test back
  • Import OpenAPI specs, user stories, or markdown files -- AI extracts test cases
  • When a test breaks, AI proposes a fix and you decide whether to accept it
  • Enhance existing tests with additional coverage

How to use:

  1. Make sure you have an AI provider configured in AI Configuration
  2. Use "Generate with AI" and provide a URL or description
  3. Review the generated test code
  4. Edit if needed, then save

Enhance with AI (live MCP wiring)

The "Enhance with AI" flow now streams progress through the Lastest MCP server: as AI plans new test cases the real test names appear in the UI before the code is generated, and selectors are validated against the live page via MCP tooling rather than guessed. This drastically reduces "test renamed in code but still labelled untitled-3 in the UI" drift and surfaces selector failures while AI is still iterating.

Best for: Day-to-day development, iterating on tests, fixing breakages fast.


3. Full Autonomous (Play Agent)

One click kicks off an 11-step pipeline:

  1. Check settings and AI provider
  2. Select repository
  3. Environment setup
  4. Scan routes and apply testing template
  5. Plan functional areas
  6. Review plan (human approval checkpoint)
  7. Generate tests
  8. Run the tests
  9. Fix failures (up to 3 attempts per test)
  10. Re-run fixed tests
  11. Report results

The pipeline runs specialized sub-agents: Orchestrator, Planner, Scout, Diver, Generator, and Healer. The agent pauses and asks for help only when it hits something it can't resolve on its own (missing settings, server offline). You can pause/resume, approve plans, and skip steps, and you can monitor progress in real time on the Agent Monitoring page. The fix loop in steps 9-10 is sketched below.
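
A sketch of that bounded fix loop, under assumed function names (runTest, proposeFix, and applyFix are illustrative, not Lastest's real internals):

  type RunResult = { passed: boolean; error?: string };
  declare function runTest(id: string): Promise<RunResult>;               // steps 8 and 10
  declare function proposeFix(id: string, err?: string): Promise<string>; // Healer sub-agent
  declare function applyFix(id: string, patch: string): Promise<void>;

  async function healTest(id: string): Promise<boolean> {
    let result = await runTest(id);
    for (let attempt = 1; !result.passed && attempt <= 3; attempt++) {
      await applyFix(id, await proposeFix(id, result.error));  // up to 3 fix attempts
      result = await runTest(id);                              // re-run after each fix
    }
    return result.passed;                                      // reported in step 11
  }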

How to use:

  1. Configure an AI provider in Settings
  2. Navigate to your repository
  3. Click Play Agent
  4. Let it run -- monitor progress in the UI
  5. Review results when complete

Best for: Onboarding a new project, generating full coverage from scratch, CI bootstrapping.


Spec-Driven Test Generation

Import structured specifications and let AI generate tests from them:

  • OpenAPI specs -- import your API spec and generate tests for each endpoint
  • User stories -- paste or import user stories in any format
  • Markdown files -- import documentation or requirements

How to use:

  1. Navigate to your repository's test list
  2. Click Import Spec
  3. Paste or upload your spec/stories
  4. AI extracts test cases and generates Playwright code
  5. Review and save individual tests
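
For example, an OpenAPI path such as GET /users/{id} might translate into a Playwright API test along these lines (a sketch; the endpoint, host, and assertions are illustrative):

  import { test, expect } from '@playwright/test';

  // Sketch of a test AI might derive from "GET /users/{id}" in an OpenAPI spec.
  test('GET /users/{id} returns the requested user', async ({ request }) => {
    const res = await request.get('https://api.example.com/users/42');
    expect(res.status()).toBe(200);               // spec: documented 200 response
    const body = await res.json();
    expect(body).toHaveProperty('id', 42);        // spec: required response field
  });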

Test Organization

Functional Areas

Organize tests into a nested hierarchy of functional areas:

  • Create parent and child areas (e.g., "Auth" > "Login", "Auth" > "Registration")
  • Drag and drop to reorder
  • Tests within an area run together

Test Suites

Group tests into ordered suites for structured execution:

  • Create named suites
  • Add tests in a specific order
  • Run entire suites at once
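
To make the two groupings concrete, a minimal sketch of how they might be modeled (all type and field names are assumptions):

  // Assumed shapes, for illustration only.
  interface FunctionalArea {
    name: string;                  // e.g. "Auth"
    children: FunctionalArea[];    // e.g. "Login", "Registration"
    testIds: string[];             // tests in an area run together
  }

  interface TestSuite {
    name: string;
    orderedTestIds: string[];      // execution order is part of the suite
  }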

Test Versioning

Every change to a test creates a new version with a change reason:

  • Manual edit
  • AI fix
  • AI enhance
  • Restored (from a previous version)

You can view the full history and restore any previous version.
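
Conceptually, each saved version pairs the test code with its change reason; a sketch of such a record (field names assumed, reasons taken from the list above):

  type ChangeReason = 'manual_edit' | 'ai_fix' | 'ai_enhance' | 'restored';

  // Assumed shape of a version record.
  interface TestVersion {
    version: number;
    reason: ChangeReason;
    code: string;          // full Playwright source at this version
    createdAt: string;     // restoring copies an old version into a new one
  }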


Multi-Step Screenshots

Capture multiple labeled screenshots within a single test for multi-page flow testing. Each screenshot is compared independently against its own baseline.
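
In generated Playwright code this corresponds to several named screenshot assertions inside one test, each diffed against its own baseline image. A sketch (flow and names illustrative):

  import { test, expect } from '@playwright/test';

  test('checkout flow screenshots', async ({ page }) => {
    await page.goto('https://shop.example.com/cart');
    await expect(page).toHaveScreenshot('cart.png');       // baseline 1

    await page.getByRole('button', { name: 'Checkout' }).click();
    await expect(page).toHaveScreenshot('checkout.png');   // baseline 2, compared independently
  });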


Auto-Detect Capabilities

When recording tests, Lastest automatically detects required browser capabilities:

  • File upload
  • Clipboard access
  • Downloads
  • Network interception

These are automatically enabled in the corresponding Playwright settings.
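
In plain Playwright terms, those capabilities map onto context options and permissions. A sketch of the kind of settings a capability detector would flip on (values illustrative):

  // playwright.config.ts -- options a capability detector would enable.
  import { defineConfig } from '@playwright/test';

  export default defineConfig({
    use: {
      acceptDownloads: true,                               // downloads
      permissions: ['clipboard-read', 'clipboard-write'],  // clipboard access (Chromium)
      // File upload (page.setInputFiles) and network interception (page.route)
      // need no config flag; the recorder only has to emit the right calls.
    },
  });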


Recording Engines

Lastest supports two recording engines:

  • Custom Recorder -- Lastest's built-in recorder with enhanced capabilities
  • Playwright Inspector -- The official Playwright recording tool

Configure the default engine in the Settings Reference.


Recording Verification

After you stop a recording, Lastest replays the captured steps against a fresh page and verifies every selector resolves before saving the test. Selectors that resolved during the original recording but fail on replay (because the page rebuilt the DOM, an element was conditionally rendered, etc.) are surfaced immediately so you can pick a sturdier selector or add an explicit wait. This sanity-check step prevents the classic "the test passed when I recorded it, then failed on the very next run" failure mode.
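
When verification flags a selector, the usual remedy is a semantic locator plus an explicit wait. A before/after sketch (selectors and names illustrative):

  import { test } from '@playwright/test';

  test('verified replay example', async ({ page }) => {
    await page.goto('https://app.example.com/settings');

    // Recorded selector that resolved once, then failed on replay:
    //   await page.locator('div:nth-child(3) > button').click();

    // Sturdier replacement: role-based locator plus an explicit wait.
    const save = page.getByRole('button', { name: 'Save' });
    await save.waitFor({ state: 'visible' });
    await save.click();
  });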


Test Specs & Agent Plans

The freeform description columns on tests and functional areas have been replaced with two structured fields:

  • test_specs -- the user-facing description of what a test should do, used by AI when re-generating or healing the test
  • agent_plan -- the Play Agent's planning output (functional area decomposition, test case rationale)

Existing descriptions were backfilled into test_specs when the v1.12 schema migration ran. The Compose page and AI prompts read from these structured fields directly.
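
A sketch of what the two fields might hold on a test record (the column names come from this section; the surrounding shape and sample content are assumptions):

  interface TestRecord {
    test_specs: string;   // e.g. "User can reset their password from the login page"
    agent_plan: string;   // Planner output: area decomposition, test case rationale
  }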


Related

Wiki: Agent Monitoring