Aardvark is an autonomous security-research agent powered by GPT-5, designed to work like a human security researcher: it continuously analyzes source-code repositories, builds threat models, scans incoming commits for vulnerabilities, validates exploitability in sandboxed environments, and proposes targeted patches for human review. Unlike traditional tools that rely purely on fuzzing or software-composition analysis, Aardvark uses an LLM-based reasoning pipeline to interpret code behavior, and it integrates directly into existing developer workflows (e.g., GitHub, code-review pipelines, and Codex for patch generation). When first connected to a repository it performs a historical scan of the entire codebase, then scans at the commit level thereafter; it also generates and verifies patches automatically and attaches a human-auditable annotation to each finding. In early internal benchmarks at OpenAI on repositories seeded with known or synthetic vulnerabilities, Aardvark achieved 92% detection recall.
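
To make the pipeline shape concrete, here is a minimal Python sketch of the four stages described above: threat modeling, commit scanning, sandboxed validation, and patch proposal. All names (`build_threat_model`, `scan_commit`, `Finding`, and so on) are hypothetical, and the LLM calls, sandbox execution, and patch generation are stubbed out; Aardvark's actual interfaces are not public, so this illustrates the flow of a finding through such a pipeline, not the real implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    CONFIRMED = auto()      # exploit reproduced in the sandbox
    UNCONFIRMED = auto()    # flagged by analysis, not yet reproduced
    DISMISSED = auto()      # analysis retracted the finding


@dataclass
class Finding:
    file: str
    description: str
    annotation: str          # human-auditable reasoning trail
    verdict: Verdict = Verdict.UNCONFIRMED
    patch: str | None = None


def build_threat_model(repo_files: dict[str, str]) -> str:
    """Stage 1: summarize the repo's attack surface (LLM call stubbed)."""
    return f"threat model over {len(repo_files)} files"


def scan_commit(diff: str, threat_model: str) -> list[Finding]:
    """Stage 2: ask the model whether the diff introduces a vulnerability."""
    # Stub: a real system would send the diff plus the threat model to an
    # LLM; here a trivial string check stands in for that reasoning step.
    if "strcpy(" in diff:
        return [Finding(
            file="parser.c",
            description="unbounded strcpy on attacker-controlled input",
            annotation=f"flagged against: {threat_model}",
        )]
    return []


def validate_in_sandbox(finding: Finding) -> Finding:
    """Stage 3: try to trigger the bug in isolation before reporting it."""
    finding.verdict = Verdict.CONFIRMED  # stub: assume the PoC reproduced
    return finding


def propose_patch(finding: Finding) -> Finding:
    """Stage 4: draft a fix for human review (patch generation stubbed)."""
    finding.patch = "- strcpy(dst, src);\n+ strlcpy(dst, src, sizeof dst);"
    return finding


def run_pipeline(repo_files: dict[str, str], diff: str) -> list[Finding]:
    model = build_threat_model(repo_files)
    findings = [validate_in_sandbox(f) for f in scan_commit(diff, model)]
    # Only sandbox-confirmed findings get a proposed patch queued for review.
    return [propose_patch(f) if f.verdict is Verdict.CONFIRMED else f
            for f in findings]


if __name__ == "__main__":
    for f in run_pipeline({"parser.c": "..."}, "strcpy(dst, user_input);"):
        print(f.verdict.name, "-", f.description)
```

The key design point this sketch reflects is the gating order: a finding only reaches patch proposal after sandbox validation confirms exploitability, which is how a system like this keeps false positives from flooding human reviewers.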