Viktor Fási

Lastest

Free, open-source visual regression testing with AI-generated tests.

Record it. Test it. Ship it.


What is Lastest?

Lastest is a self-hosted visual regression testing platform that records your tests, writes them with AI, runs them anywhere, and fixes them when they break -- all in one tool.

1. Point it at your app
2. Record your user flows (point-and-click, no code)
3. AI generates resilient test code with multi-selector fallback
4. Run on remote runners or in an embedded browser container (Embedded Browser setup required)
5. Screenshots are compared using one of three diff engines (pixelmatch, SSIM, Butteraugli)
6. Review and approve visual changes -- or let AI auto-classify them
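The multi-selector fallback in step 3 can be sketched like this. It is a minimal illustration, not Lastest's actual generated code: `firstMatching` and the plain lookup function are stand-ins for what would wrap Playwright's `page.locator()` in a real test.

```typescript
// Try selectors in priority order and return the first one that still
// resolves to an element, so a test survives when any single selector breaks.
type Lookup = (selector: string) => string | null;

function firstMatching(selectors: string[], query: Lookup): string {
  for (const sel of selectors) {
    if (query(sel) !== null) return sel; // first selector that matches wins
  }
  throw new Error(`No selector matched: ${selectors.join(", ")}`);
}

// Example: the data-testid survived a refactor, the CSS class did not.
const dom: Record<string, string> = { '[data-testid="submit"]': "button" };
const chosen = firstMatching(
  ['[data-testid="submit"]', "button.submit-btn", "text=Submit"],
  (sel) => dom[sel] ?? null,
);
```

The priority order matters: stable attributes like `data-testid` come first, brittle ones like CSS classes last, so the fallback only fires when the preferred selector actually breaks.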

When self-hosted, your data stays on your server and your screenshots never leave your infra.


Core Flow

Development & Review Flow

Create Tests        Run Tests           Review
  Record manually     Embedded            Approve/
  AI-assisted         Browser or          Reject
  Play Agent auto     remote / CI         changes

  One-time cost       Zero AI per run     New baseline saved
  (AI optional)       (pure Playwright)

1. Create: Build tests your way -- record manually, let AI generate from a URL or spec, or let the Play Agent autonomously scan your entire app.
2. Run: Execute tests in an Embedded Browser pod (default), on remote runners, or in CI/CD. No AI needed -- pure Playwright execution. Local Playwright on the host is no longer supported; the Embedded Browser stack is required.
3. Compare: New screenshots are diffed against baselines using your chosen engine. An optional DOM-diff fallback catches structural changes when pixel comparison is inconclusive.
4. Review: Visual diffs are classified. Approve intentional changes -- they become the new baseline.
5. Fix: When tests break, AI can propose fixes, or the Play Agent can fix and re-run autonomously.
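The compare-and-review decision in steps 3 and 4 can be sketched as a small classifier. This is an illustration only; the threshold, field names, and the `needs-review` outcome are assumptions, not Lastest's actual classification logic.

```typescript
// Decide a run's outcome from the pixel-diff ratio, falling back to a
// DOM diff (if one was computed) when the pixel result is inconclusive.
type Outcome = "pass" | "changed" | "needs-review";

function classify(pixelDiffRatio: number, domChanged: boolean | null): Outcome {
  if (pixelDiffRatio === 0) return "pass";        // byte-for-byte or zero-diff
  if (pixelDiffRatio > 0.05) return "changed";    // clearly different, send to review
  // Small pixel diff: inconclusive, so consult the DOM diff if available.
  if (domChanged === null) return "needs-review"; // no fallback data, a human decides
  return domChanged ? "changed" : "pass";         // structural change settles it
}
```

The point of the fallback is that anti-aliasing or font rendering can produce tiny pixel diffs with no structural change, while a real layout break always shows up in the DOM.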

Build Once, Run Forever

  • First run: screenshot becomes the baseline
  • Every run after: new screenshot is SHA256-hashed -- if it matches the baseline, instant pass. If it differs, the diff engine runs and you review the change.
  • AI costs are one-time: AI is only used during test creation and fixing. Running tests uses zero AI.
  • No per-screenshot pricing on self-hosted: every run is unlimited regardless of volume.
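The hash short-circuit above can be sketched in a few lines. This is illustrative only: `instantPass` is a made-up name, and buffers stand in for PNG files; Lastest's storage layer is not shown.

```typescript
import { createHash } from "crypto";

// Hash-first comparison: if the new screenshot's SHA-256 matches the
// baseline's, the images are byte-identical and the run passes without
// ever invoking a diff engine.
function sha256(buf: Buffer): string {
  return createHash("sha256").update(buf).digest("hex");
}

function instantPass(baseline: Buffer, candidate: Buffer): boolean {
  return sha256(baseline) === sha256(candidate);
}
```

Because hashing is orders of magnitude cheaper than pixel comparison, unchanged pages (the common case in regression runs) cost almost nothing per run.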


Related

Wiki: AI Configuration
Wiki: API Tokens
Wiki: Agent Monitoring
Wiki: Bug Reports
Wiki: CI CD Integration
Wiki: Creating Tests
Wiki: Custom Webhooks
Wiki: Docker Deployment
Wiki: Environment Variables
Wiki: Gamification
Wiki: Getting Started
Wiki: GitHub Integration
Wiki: GitLab Integration
Wiki: Google Sheets Integration
Wiki: MCP Server
Wiki: Remote Runners
Wiki: Running Tests
Wiki: Scheduled Runs
Wiki: Settings Reference
Wiki: Test Migration
Wiki: VSCode Extension API
Wiki: Visual Diffing
