<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Recent changes to AI Configuration</title><link>https://sourceforge.net/p/lastest/wiki/AI%20Configuration/</link><description>Recent changes to AI Configuration</description><atom:link href="https://sourceforge.net/p/lastest/wiki/AI%20Configuration/feed" rel="self" type="application/rss+xml"/><language>en</language><lastBuildDate>Wed, 06 May 2026 09:06:53 -0000</lastBuildDate><item><title>AI Configuration modified by Viktor Fási</title><link>https://sourceforge.net/p/lastest/wiki/AI%20Configuration/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v3
+++ v4
@@ -109,3 +109,19 @@

 - Key dependencies with testing implications (100+ package database)

 This runs automatically during Play Agent and AI-assisted test generation.
+
+---
+
+## AI Test Data Variables
+
+The **Vars tab** on every test now supports three kinds of data sources:
+
+| Type | Example | When to use |
+|------|---------|-------------|
+| **Static** | `email = "qa@example.com"` | Hard-coded test data |
+| **AI-generated (with presets)** | `customer_name`, `address`, `iban` -- AI fills in realistic values on each run | When the class of value matters more than any specific value |
+| **Google Sheets-backed** | A column from a Sheet pinned to the test | Shared, collaboratively edited data |
+
+AI-generated vars use deterministic seeding when timestamp/random freezing is enabled, so values remain stable across runs for diffing. Sheets vars and other connected sources are surfaced inline on the test page, so reviewers can see what data drove a given run.
+
+See [Google Sheets Integration] for the Sheets side of this.
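+
+As an illustration only -- the variable names and preset notation below are hypothetical, not a fixed Lastest schema -- a Vars tab mixing all three source types might hold:
+
+```text
+email         = "qa@example.com"       # static, hard-coded
+customer_name = AI preset: name        # AI fills a realistic value per run
+iban          = AI preset: iban        # AI fills a realistic value per run
+plan_tier     = Sheet "QA data" column # pinned Google Sheets column
+```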
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Viktor Fási</dc:creator><pubDate>Wed, 06 May 2026 09:06:53 -0000</pubDate><guid>https://sourceforge.net440d1f75fa055b2084dc4047f770969ecced6047</guid></item><item><title>AI Configuration modified by Viktor Fási</title><link>https://sourceforge.net/p/lastest/wiki/AI%20Configuration/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v2
+++ v3
@@ -12,6 +12,7 @@
 | **Anthropic Direct** | Direct API calls to Anthropic | Per API usage |
 | **OpenRouter** | Access multiple models via OpenRouter | Per API usage |
 | **Claude Agent SDK** | Uses Anthropic's Agent SDK | Per API usage |
+| **OpenAI** | Direct API calls to OpenAI (GPT-4o, etc.) | Per API usage |
 | **Ollama** | Run local models with zero API cost | Free (local compute) |

 ---
@@ -35,6 +36,10 @@
 ### OpenRouter

 - Enter your OpenRouter API key
 - Select from available models
+
+### OpenAI
+
+- Enter your OpenAI API key
+- Select from available models (GPT-4o, etc.)

 ### Ollama

 - Install Ollama on your machine
@@ -86,5 +91,21 @@
 | Spec-driven generation | Yes |
 | AI diff analysis | Yes (optional) |
 | Route discovery | Yes |
+| AI failure triage | Yes (optional) |
+| Codebase intelligence | Yes |

 **Running tests never requires AI** -- it's pure Playwright execution.
+
+---
+
+## Codebase Intelligence
+
+When AI generates tests, Lastest can automatically detect your project context to enrich prompts:
+- Framework (Next.js, React, Vue, etc.)
+- CSS framework (Tailwind, CSS Modules, styled-components)
+- Auth mechanism (NextAuth, Clerk, Firebase, etc.)
+- State management (Redux, Zustand, etc.)
+- API layer (REST, GraphQL, tRPC)
+- Key dependencies with testing implications (100+ package database)
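+
+For a typical app, the detected context might read as follows (hypothetical values for illustration; the exact output format is not specified here):
+
+```text
+framework: Next.js (App Router)
+css:       Tailwind
+auth:      NextAuth
+state:     Zustand
+api:       tRPC
+```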
+
+This runs automatically during Play Agent and AI-assisted test generation.
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Viktor Fási</dc:creator><pubDate>Wed, 06 May 2026 09:06:52 -0000</pubDate><guid>https://sourceforge.netc908d80cc18563c8f75bbb7c1e1dc560bb5c1868</guid></item><item><title>AI Configuration modified by Viktor Fási</title><link>https://sourceforge.net/p/lastest/wiki/AI%20Configuration/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v1
+++ v2
@@ -1,6 +1,6 @@
 # AI Configuration

-Lastest2 supports multiple AI providers for test generation, fixing, and diff analysis.
+Lastest supports multiple AI providers for test generation, fixing, and diff analysis.

 ---

@@ -26,7 +26,7 @@

 ### Claude CLI

 - Install the `claude` CLI: follow Anthropic's instructions
-- No API key needed in Lastest2 -- it uses your CLI authentication
+- No API key needed in Lastest -- it uses your CLI authentication

 ### Anthropic Direct

 - Enter your Anthropic API key
@@ -39,7 +39,7 @@
 ### Ollama
 - Install Ollama on your machine
 - Pull a model (e.g., `ollama pull llama3`)
-- Lastest2 connects to the local Ollama server automatically
+- Lastest connects to the local Ollama server automatically
 - Zero API cost -- runs entirely on your hardware

 ---
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Viktor Fási</dc:creator><pubDate>Wed, 06 May 2026 09:06:51 -0000</pubDate><guid>https://sourceforge.net700c3db7b7ebd2777430c19d532ac94e363dfd1f</guid></item><item><title>AI Configuration modified by Viktor Fási</title><link>https://sourceforge.net/p/lastest/wiki/AI%20Configuration/</link><description>&lt;div class="markdown_content"&gt;&lt;h1 id="h-ai-configuration"&gt;AI Configuration&lt;/h1&gt;
&lt;p&gt;Lastest2 supports multiple AI providers for test generation, fixing, and diff analysis.&lt;/p&gt;
&lt;hr/&gt;
&lt;h2 id="h-ai-providers"&gt;AI Providers&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Provider&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude CLI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Uses the &lt;code&gt;claude&lt;/code&gt; CLI tool installed on your machine&lt;/td&gt;
&lt;td&gt;Per your Anthropic plan&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Anthropic Direct&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Direct API calls to Anthropic&lt;/td&gt;
&lt;td&gt;Per API usage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OpenRouter&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Access multiple models via OpenRouter&lt;/td&gt;
&lt;td&gt;Per API usage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude Agent SDK&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Uses Anthropic's Agent SDK&lt;/td&gt;
&lt;td&gt;Per API usage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Ollama&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Run local models with zero API cost&lt;/td&gt;
&lt;td&gt;Free (local compute)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;hr/&gt;
&lt;h2 id="h-setting-up-ai"&gt;Setting Up AI&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Go to &lt;strong&gt;Settings &amp;gt; AI&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Select your &lt;strong&gt;test generation provider&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Enter the required API key or configuration&lt;/li&gt;
&lt;li&gt;Optionally select a separate &lt;strong&gt;diff analysis provider&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Settings auto-save after 500ms&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="h-claude-cli"&gt;Claude CLI&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Install the &lt;code&gt;claude&lt;/code&gt; CLI: follow Anthropic's instructions&lt;/li&gt;
&lt;li&gt;No API key needed in Lastest2 -- it uses your CLI authentication&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="h-anthropic-direct"&gt;Anthropic Direct&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Enter your Anthropic API key&lt;/li&gt;
&lt;li&gt;Select the model (defaults to latest Claude)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="h-openrouter"&gt;OpenRouter&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Enter your OpenRouter API key&lt;/li&gt;
&lt;li&gt;Select from available models&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="h-ollama"&gt;Ollama&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Install Ollama on your machine&lt;/li&gt;
&lt;li&gt;Pull a model (e.g., &lt;code&gt;ollama pull llama3&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Lastest2 connects to the local Ollama server automatically&lt;/li&gt;
&lt;li&gt;Zero API cost -- runs entirely on your hardware&lt;/li&gt;
&lt;/ul&gt;
&lt;hr/&gt;
&lt;h2 id="h-separate-diff-provider"&gt;Separate Diff Provider&lt;/h2&gt;
&lt;p&gt;You can use a different AI provider for diff analysis than for test generation. This is useful when:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You want a cheaper/faster model for diff classification&lt;/li&gt;
&lt;li&gt;You want to use a local model (Ollama) for diffs but cloud AI for generation&lt;/li&gt;
&lt;li&gt;You want to keep diff analysis costs separate&lt;/li&gt;
&lt;/ul&gt;
&lt;hr/&gt;
&lt;h2 id="h-custom-instructions"&gt;Custom Instructions&lt;/h2&gt;
&lt;p&gt;Add custom instructions that are included in every AI prompt:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Guide the AI about your app's structure&lt;/li&gt;
&lt;li&gt;Specify preferred selectors or patterns&lt;/li&gt;
&lt;li&gt;Add context about your testing requirements&lt;/li&gt;
&lt;/ul&gt;
&lt;hr/&gt;
&lt;h2 id="h-ai-prompt-logs"&gt;AI Prompt Logs&lt;/h2&gt;
&lt;p&gt;Full audit trail of all AI requests and responses:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;View the last 50 AI interactions&lt;/li&gt;
&lt;li&gt;See exactly what was sent and received&lt;/li&gt;
&lt;li&gt;Debug AI-generated tests&lt;/li&gt;
&lt;li&gt;Available in &lt;strong&gt;Settings &amp;gt; AI Logs&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr/&gt;
&lt;h2 id="h-when-ai-is-used"&gt;When AI is Used&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;AI Required?&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Recording tests&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Running tests&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Diffing screenshots&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Generating tests from URL&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fixing broken tests&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enhancing tests&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Play Agent autonomy&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Spec-driven generation&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI diff analysis&lt;/td&gt;
&lt;td&gt;Yes (optional)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Route discovery&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;Running tests never requires AI&lt;/strong&gt; -- it's pure Playwright execution.&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Viktor Fási</dc:creator><pubDate>Wed, 06 May 2026 09:06:48 -0000</pubDate><guid>https://sourceforge.net12c25733b127719687f50e18787865f90ebb389d</guid></item></channel></rss>