<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Recent changes to Creating Tests</title><link>https://sourceforge.net/p/lastest/wiki/Creating%2520Tests/</link><description>Recent changes to Creating Tests</description><language>en</language><lastBuildDate>Wed, 06 May 2026 09:06:53 -0000</lastBuildDate><atom:link href="https://sourceforge.net/p/lastest/wiki/Creating%20Tests/feed" rel="self" type="application/rss+xml"/><item><title>Creating Tests modified by Viktor Fási</title><link>https://sourceforge.net/p/lastest/wiki/Creating%2520Tests/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v3
+++ v4
@@ -36,6 +36,10 @@

 2. Use "Generate with AI" and provide a URL or description
 3. Review the generated test code
 4. Edit if needed, then save
+
+### Enhance with AI (live MCP wiring)
+
+The "Enhance with AI" flow now streams progress through the Lastest MCP server: as the AI plans new test cases, the real test names appear in the UI before any code is generated, and selectors are validated against the live page via MCP tooling rather than guessed. This drastically reduces "test renamed in code but still labelled `untitled-3` in the UI" drift and surfaces selector failures while the AI is still iterating.
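
The mechanism can be sketched roughly as follows; the event shape here is an assumption for illustration, not the actual Lastest MCP schema:

```python
# Illustrative sketch only: the event shape is an assumption, not the
# real Lastest MCP schema. As streamed planning events arrive, the
# placeholder labels (untitled-N) are replaced with real test names
# before any code generation happens.
def apply_plan_events(ui_labels, events):
    for event in events:
        if event["type"] == "test_planned":
            ui_labels[event["slot"]] = event["name"]
    return ui_labels

labels = {"untitled-1": None, "untitled-2": None}
stream = [
    {"type": "test_planned", "slot": "untitled-1", "name": "login succeeds"},
    {"type": "test_planned", "slot": "untitled-2", "name": "login rejects bad password"},
]
print(apply_plan_events(labels, stream))
```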

 **Best for:** Day-to-day development, iterating on tests, fixing breakages fast.

@@ -140,3 +144,20 @@

 - **Playwright Inspector** -- The official Playwright recording tool

 Configure the default engine in [Settings Reference](Settings &amp;gt; Playwright).
+
+---
+
+## Recording Verification
+
+After you stop a recording, Lastest replays the captured steps against a fresh page and verifies every selector resolves before saving the test. Selectors that resolved during the original recording but fail on replay (because the page rebuilt the DOM, an element was conditionally rendered, etc.) are surfaced immediately so you can pick a sturdier selector or add an explicit wait. This sanity-check step prevents the classic "the test passed when I recorded it, then failed on the very next run" failure mode.
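
The replay check can be sketched like this, with a toy model in which a "page" is just the set of selectors that currently resolve; the real check replays the steps against a live browser:

```python
# Toy model of the replay check: a "page" is the set of selectors that
# currently resolve. Real verification replays the steps in a browser.
def verify_recording(steps, fresh_page_selectors):
    # Return selectors that resolved at record time but fail on replay.
    failures = []
    for step in steps:
        if step["selector"] not in fresh_page_selectors:
            failures.append(step["selector"])
    return failures

recorded = [
    {"action": "click", "selector": "#login"},
    {"action": "click", "selector": "#flash-banner"},  # conditionally rendered
]
fresh = {"#login", "#username", "#password"}
print(verify_recording(recorded, fresh))  # only the banner selector fails
```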
+
+---
+
+## Test Specs &amp;amp; Agent Plans
+
+The freeform `description` columns on tests and functional areas have been replaced with two structured fields:
+
+- **`test_specs`** -- the user-facing description of what a test should do, used by AI when re-generating or healing the test
+- **`agent_plan`** -- the Play Agent's planning output (functional area decomposition, test case rationale)
+
+Existing descriptions were backfilled into `test_specs` when the v1.12 schema migration ran. The Compose page and AI prompts read from these structured fields directly.
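
A minimal sketch of such a backfill, assuming hypothetical table and column names for illustration:

```python
# Hypothetical backfill mirroring the described v1.12 migration; the
# table and column names here are assumptions, not Lastest's schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tests (id INTEGER PRIMARY KEY, description TEXT)")
conn.execute("INSERT INTO tests (description) VALUES ('user can log in')")

# Add the structured columns, then copy the old freeform text across.
conn.execute("ALTER TABLE tests ADD COLUMN test_specs TEXT")
conn.execute("ALTER TABLE tests ADD COLUMN agent_plan TEXT")
conn.execute("UPDATE tests SET test_specs = description")
conn.commit()

print(conn.execute("SELECT test_specs FROM tests").fetchall())
```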
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Viktor Fási</dc:creator><pubDate>Wed, 06 May 2026 09:06:53 -0000</pubDate><guid>https://sourceforge.net78a367d34532f4e69884930e3cd2306191d435db</guid></item><item><title>Creating Tests modified by Viktor Fási</title><link>https://sourceforge.net/p/lastest/wiki/Creating%2520Tests/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v2
+++ v3
@@ -43,17 +43,21 @@

 ## 3. Full Autonomous (Play Agent)

-One click kicks off a 9-step pipeline:
+One click kicks off an 11-step pipeline:

-1. Scan your repo for routes
-2. Classify your app type
-3. Generate tests for discovered routes
-4. Run the tests
-5. Fix failures (up to 3 attempts per test)
-6. Re-run fixed tests
-7. Report results
+1. Check settings and AI provider
+2. Select repository
+3. Environment setup
+4. Scan routes and apply testing template
+5. Plan functional areas
+6. Review plan (human approval checkpoint)
+7. Generate tests
+8. Run the tests
+9. Fix failures (up to 3 attempts per test)
+10. Re-run fixed tests
+11. Report results

-The agent pauses and asks for help only when it hits something it can't resolve on its own (missing settings, server offline). You resume and it picks up where it left off.
+The pipeline runs on specialized sub-agents: Orchestrator, Planner, Scout, Diver, Generator, and Healer. The agent pauses and asks for help only when it hits something it can't resolve on its own (missing settings, server offline). You can pause/resume, approve plans, and skip steps, and you can monitor progress in real time via the [activity feed](Agent Monitoring).
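
The fix-failures step (9) can be sketched as a bounded retry loop; `run_test` and `heal_test` are stand-ins for the real Generator and Healer agents, not actual Lastest APIs:

```python
# Sketch of "fix failures, up to 3 attempts per test"; run_test and
# heal_test stand in for the real agents.
def heal_failures(tests, run_test, heal_test, max_attempts=3):
    # Retry each failing test, invoking the healer between attempts.
    still_failing = []
    for test in tests:
        fixed = run_test(test)
        attempts = 0
        while not fixed and attempts != max_attempts:
            test = heal_test(test)
            fixed = run_test(test)
            attempts += 1
        if not fixed:
            still_failing.append(test)
    return still_failing

calls = {}
def run(t):
    calls[t] = calls.get(t, 0) + 1
    return calls[t] == 3          # passes on the third run
def heal(t):
    return t                      # pretend the healer patched the code
print(heal_failures(["checkout"], run, heal))  # healed in time, so empty
```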

 **How to use:**

 1. Configure an AI provider in Settings
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Viktor Fási</dc:creator><pubDate>Wed, 06 May 2026 09:06:52 -0000</pubDate><guid>https://sourceforge.netfe813b67e96c46110e413c0d8349e94e80e1c8ea</guid></item><item><title>Creating Tests modified by Viktor Fási</title><link>https://sourceforge.net/p/lastest/wiki/Creating%2520Tests/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v1
+++ v2
@@ -1,12 +1,12 @@
 # Creating Tests

-Lastest2 offers three ways to create tests, from fully manual to fully autonomous.
+Lastest offers three ways to create tests, from fully manual to fully autonomous.

 ---

 ## 1. AI-Free (Manual Recording)

-Open the recorder, click through your app, hit stop. Lastest2 captures every interaction and generates deterministic Playwright code -- no AI involved, no API keys needed. You own the test code and can edit it by hand.
+Open the recorder, click through your app, hit stop. Lastest captures every interaction and generates deterministic Playwright code -- no AI involved, no API keys needed. You own the test code and can edit it by hand.

 **How to use:**

 1. Navigate to your repository
@@ -119,7 +119,7 @@

 ## Auto-Detect Capabilities

-When recording tests, Lastest2 automatically detects required browser capabilities:
+When recording tests, Lastest automatically detects required browser capabilities:

 - File upload
 - Clipboard access
 - Downloads
@@ -131,8 +131,8 @@

 ## Recording Engines

-Lastest2 supports two recording engines:
-- **Custom Recorder** -- Lastest2's built-in recorder with enhanced capabilities
+Lastest supports two recording engines:
+- **Custom Recorder** -- Lastest's built-in recorder with enhanced capabilities

 - **Playwright Inspector** -- The official Playwright recording tool

 Configure the default engine in [Settings Reference](Settings &amp;gt; Playwright).
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Viktor Fási</dc:creator><pubDate>Wed, 06 May 2026 09:06:51 -0000</pubDate><guid>https://sourceforge.net5bb0fc4196d65affca2aa49e5e09acef2290ba0e</guid></item><item><title>Creating Tests modified by Viktor Fási</title><link>https://sourceforge.net/p/lastest/wiki/Creating%2520Tests/</link><description>&lt;div class="markdown_content"&gt;&lt;h1 id="h-creating-tests"&gt;Creating Tests&lt;/h1&gt;
&lt;p&gt;Lastest2 offers three ways to create tests, from fully manual to fully autonomous.&lt;/p&gt;
&lt;hr/&gt;
&lt;h2 id="h-1-ai-free-manual-recording"&gt;1. AI-Free (Manual Recording)&lt;/h2&gt;
&lt;p&gt;Open the recorder, click through your app, hit stop. Lastest2 captures every interaction and generates deterministic Playwright code -- no AI involved, no API keys needed. You own the test code and can edit it by hand.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;How to use:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to your repository&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Record&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Enter the URL of your app&lt;/li&gt;
&lt;li&gt;Interact with your app -- click, type, scroll, navigate&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Stop&lt;/strong&gt; when done&lt;/li&gt;
&lt;li&gt;Review the generated Playwright code&lt;/li&gt;
&lt;li&gt;Save the test&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Teams that don't want AI, air-gapped environments, simple flows.&lt;/p&gt;
&lt;hr/&gt;
&lt;h2 id="h-2-ai-assisted-human-in-the-loop"&gt;2. AI-Assisted (Human-in-the-Loop)&lt;/h2&gt;
&lt;p&gt;AI generates, fixes, or enhances tests -- but you review and approve before anything is saved.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Feed it a URL and get a test back&lt;/li&gt;
&lt;li&gt;Import OpenAPI specs, user stories, or markdown files -- AI extracts test cases&lt;/li&gt;
&lt;li&gt;When a test breaks, AI proposes a fix and you decide whether to accept it&lt;/li&gt;
&lt;li&gt;Enhance existing tests with additional coverage&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How to use:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Make sure you have an AI provider configured in &lt;a href="./Settings"&gt;AI Configuration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Use "Generate with AI" and provide a URL or description&lt;/li&gt;
&lt;li&gt;Review the generated test code&lt;/li&gt;
&lt;li&gt;Edit if needed, then save&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Day-to-day development, iterating on tests, fixing breakages fast.&lt;/p&gt;
&lt;hr/&gt;
&lt;h2 id="h-3-full-autonomous-play-agent"&gt;3. Full Autonomous (Play Agent)&lt;/h2&gt;
&lt;p&gt;One click kicks off a 9-step pipeline:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Scan your repo for routes&lt;/li&gt;
&lt;li&gt;Classify your app type&lt;/li&gt;
&lt;li&gt;Generate tests for discovered routes&lt;/li&gt;
&lt;li&gt;Run the tests&lt;/li&gt;
&lt;li&gt;Fix failures (up to 3 attempts per test)&lt;/li&gt;
&lt;li&gt;Re-run fixed tests&lt;/li&gt;
&lt;li&gt;Report results&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The agent pauses and asks for help only when it hits something it can't resolve on its own (missing settings, server offline). You resume and it picks up where it left off.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;How to use:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Configure an AI provider in Settings&lt;/li&gt;
&lt;li&gt;Navigate to your repository&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Play Agent&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Let it run -- monitor progress in the UI&lt;/li&gt;
&lt;li&gt;Review results when complete&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Onboarding a new project, generating full coverage from scratch, CI bootstrapping.&lt;/p&gt;
&lt;hr/&gt;
&lt;h2 id="h-spec-driven-test-generation"&gt;Spec-Driven Test Generation&lt;/h2&gt;
&lt;p&gt;Import structured specifications and let AI generate tests from them:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;OpenAPI specs&lt;/strong&gt; -- import your API spec and generate tests for each endpoint&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;User stories&lt;/strong&gt; -- paste or import user stories in any format&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Markdown files&lt;/strong&gt; -- import documentation or requirements&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How to use:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to your repository's test list&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Import Spec&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Paste or upload your spec/stories&lt;/li&gt;
&lt;li&gt;AI extracts test cases and generates Playwright code&lt;/li&gt;
&lt;li&gt;Review and save individual tests&lt;/li&gt;
&lt;/ol&gt;
&lt;hr/&gt;
&lt;h2 id="h-test-organization"&gt;Test Organization&lt;/h2&gt;
&lt;h3 id="h-functional-areas"&gt;Functional Areas&lt;/h3&gt;
&lt;p&gt;Organize tests into a nested hierarchy of functional areas:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create parent and child areas (e.g., "Auth" &amp;gt; "Login", "Auth" &amp;gt; "Registration")&lt;/li&gt;
&lt;li&gt;Drag and drop to reorder&lt;/li&gt;
&lt;li&gt;Tests within an area run together&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="h-test-suites"&gt;Test Suites&lt;/h3&gt;
&lt;p&gt;Group tests into ordered suites for structured execution:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create named suites&lt;/li&gt;
&lt;li&gt;Add tests in a specific order&lt;/li&gt;
&lt;li&gt;Run entire suites at once&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="h-test-versioning"&gt;Test Versioning&lt;/h3&gt;
&lt;p&gt;Every change to a test creates a new version with a change reason:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Manual edit&lt;/li&gt;
&lt;li&gt;AI fix&lt;/li&gt;
&lt;li&gt;AI enhance&lt;/li&gt;
&lt;li&gt;Restored (from a previous version)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can view the full history and restore any previous version.&lt;/p&gt;
&lt;hr/&gt;
&lt;h2 id="h-multi-step-screenshots"&gt;Multi-Step Screenshots&lt;/h2&gt;
&lt;p&gt;Capture multiple labeled screenshots within a single test for multi-page flow testing. Each screenshot is compared independently against its own baseline.&lt;/p&gt;
&lt;hr/&gt;
&lt;h2 id="h-auto-detect-capabilities"&gt;Auto-Detect Capabilities&lt;/h2&gt;
&lt;p&gt;When recording tests, Lastest2 automatically detects required browser capabilities:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;File upload&lt;/li&gt;
&lt;li&gt;Clipboard access&lt;/li&gt;
&lt;li&gt;Downloads&lt;/li&gt;
&lt;li&gt;Network interception&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These are automatically enabled in the corresponding Playwright settings.&lt;/p&gt;
&lt;hr/&gt;
&lt;h2 id="h-recording-engines"&gt;Recording Engines&lt;/h2&gt;
&lt;p&gt;Lastest2 supports two recording engines:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Custom Recorder&lt;/strong&gt; -- Lastest2's built-in recorder with enhanced capabilities&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Playwright Inspector&lt;/strong&gt; -- The official Playwright recording tool&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Configure the default engine in &lt;a href="./Settings%20&amp;gt;%20Playwright"&gt;Settings Reference&lt;/a&gt;.&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Viktor Fási</dc:creator><pubDate>Wed, 06 May 2026 09:06:48 -0000</pubDate><guid>https://sourceforge.netb40caad18f8e68789c77acb8f4261fb01e759995</guid></item></channel></rss>