Maestro

Made with Maestro
Discord

Run AI coding agents autonomously for days.

Maestro is a cross-platform desktop app for orchestrating your fleet of AI agents and projects. It's a high-velocity solution for hackers who are juggling multiple projects in parallel. Designed for power users who live on the keyboard and rarely touch the mouse.

Collaborate with AI to create detailed specification documents, then let Auto Run execute them automatically, each task in a fresh session with clean context. This enables long-running unattended sessions; my current record is nearly 24 hours of continuous runtime.

Run multiple agents in parallel with a Linear/Superhuman-level responsive interface. Currently supporting Claude Code, OpenAI Codex, and OpenCode with plans for additional agentic coding tools (Aider, Gemini CLI, Qwen3 Coder) based on user demand.

Installation

Download

Download the latest release for your platform from the Releases page:

  • macOS: .dmg or .zip
  • Windows: .exe installer
  • Linux: .AppImage, .deb, or .rpm

Upgrading: Simply replace the old binary with the new one. All your data (sessions, settings, playbooks, history) persists in your config directory.

Requirements

  • At least one supported AI coding agent installed and authenticated:
      • Claude Code - Anthropic's AI coding assistant
      • OpenAI Codex - OpenAI's coding agent
      • OpenCode - Open-source AI coding assistant
  • Git (optional, for git-aware features)

Features

Power Features

  • 🌳 Git Worktrees - Run AI agents in parallel on isolated branches. Create worktree sub-agents from the git branch menu, each operating in their own directory. Work interactively in the main repo while sub-agents process tasks independently—then create PRs with one click. True parallel development without conflicts.
  • 🤖 Auto Run & Playbooks - File-system-based task runner that batch-processes markdown checklists through AI agents. Create playbooks for repeatable workflows, run in loops, and track progress with full history. Each task gets its own AI session for clean conversation context.
  • 💬 Group Chat - Coordinate multiple AI agents in a single conversation. A moderator AI orchestrates discussions, routing questions to the right agents and synthesizing their responses for cross-project questions and architecture discussions.
  • 🌐 Mobile Remote Control - Built-in web server with QR code access. Monitor and control all your agents from your phone. Supports local network access and remote tunneling via Cloudflare for access from anywhere.
  • 💻 Command Line Interface - Full CLI (maestro-cli) for headless operation. List agents/groups, run playbooks from cron jobs or CI/CD pipelines, with human-readable or JSONL output for scripting.
  • 🚀 Multi-Instance Management - Run unlimited agents and terminal sessions in parallel. Each agent has its own workspace, conversation history, and isolated context.
  • 📬 Message Queueing - Queue messages while AI is busy; they're sent automatically when the agent becomes ready. Never lose a thought.

Core Features

  • 🔄 Dual-Mode Sessions - Each agent has both an AI Terminal and Command Terminal. Switch seamlessly between AI conversation and shell commands with Cmd+J.
  • ⌨️ Keyboard-First Design - Full keyboard control with customizable shortcuts and mastery tracking that rewards you for leveling up. Cmd+K quick actions, rapid agent switching, and focus management designed for flow state.
  • 📋 Session Discovery - Automatically discovers and imports all Claude Code sessions, including conversations from before Maestro was installed. Browse, search, star, rename, and resume any session.
  • 🔀 Git Integration - Automatic repo detection, branch display, diff viewer, commit logs, and git-aware file completion. Work with git without leaving the app.
  • 📁 File Explorer - Browse project files with syntax highlighting, markdown preview, and image viewing. Reference files in prompts with @ mentions.
  • 🔍 Powerful Output Filtering - Search and filter AI output with include/exclude modes, regex support, and per-response local filters.
  • Slash Commands - Extensible command system with autocomplete. Create custom commands with template variables for your workflows.
  • 💾 Draft Auto-Save - Never lose work. Drafts are automatically saved and restored per session.
  • 🔊 Speakable Notifications - Audio alerts with text-to-speech announcements when agents complete tasks.
  • 🎨 Beautiful Themes - 12 themes including Dracula, Monokai, Nord, Tokyo Night, GitHub Light, and more.
  • 💰 Cost Tracking - Real-time token usage and cost tracking per session and globally.
  • 🏆 Achievements - Level up from Apprentice to Titan of the Baton based on cumulative Auto Run time. 11 conductor-themed ranks to unlock.

Note: Maestro supports Claude Code, OpenAI Codex, and OpenCode. Support for additional agents (Aider, Gemini CLI, Qwen3 Coder) may be added in future releases based on community demand.

Spec-Driven Workflow

Maestro enables a specification-first approach to AI-assisted development. Instead of ad-hoc prompting, you collaboratively build detailed specs with the AI, then execute them systematically:

┌─────────────────────────────────────────────────────────────────────┐
│  1. PLAN          2. SPECIFY         3. EXECUTE        4. REFINE    │
│  ─────────        ──────────         ─────────         ─────────    │
│  Discuss the      Create markdown    Auto Run works    Review       │
│  feature with     docs with task     through tasks,    results,     │
│  the AI agent     checklists in      fresh session     update specs │
│                   your Auto Run      per task          and repeat   │
│                   folder                                            │
└─────────────────────────────────────────────────────────────────────┘

Why this works:
- Deliberate planning — Conversation forces you to think through requirements before coding
- Documented specs — Your markdown files become living documentation
- Clean execution — Each task runs in isolation with no context bleed
- Iterative refinement — Review, adjust specs, re-run—specs evolve with your understanding

Example workflow:

  1. Plan: In the AI Terminal, discuss your feature: "I want to add user authentication with OAuth support"
  2. Specify: Ask the AI to help create a spec: "Create a markdown checklist for implementing this feature"
  3. Save: Copy the spec to your Auto Run folder (or have the AI write it directly)
  4. Execute: Switch to Auto Run tab, select the doc, click Run—Maestro handles the rest
  5. Review: Check the History tab for results, refine specs as needed

This approach mirrors methodologies like Spec-Kit, but with a graphical interface, real-time AI collaboration, and multi-agent parallelism.
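
For illustration, a spec document in your Auto Run folder (say, oauth-auth.md; the file name and tasks here are hypothetical) might look like:

# OAuth Authentication Plan

- [ ] Add an OAuth login route and redirect handler
- [ ] Store and refresh access tokens securely
- [ ] Add unit tests for the token refresh flow
- [ ] Update the API documentation for the new endpoints

Each checkbox becomes one Auto Run task, executed in its own fresh AI session (see Auto Run below).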

Key Concepts

Concept Description
Agent A workspace tied to a project directory and AI provider (Claude Code, Codex, or OpenCode). Contains one Command Terminal and one AI Terminal with full conversation history.
Group Organizational container for agents. Group by project, client, or workflow.
Group Chat Multi-agent conversation coordinated by a moderator. Ask questions across multiple agents and get synthesized answers.
Git Worktree An isolated working directory linked to a separate branch. Worktree sub-agents appear nested under their parent in the session list and can create PRs.
AI Terminal The conversation interface with your AI agent. Supports @ file mentions, slash commands, and image attachments.
Command Terminal A PTY shell session for running commands directly. Tab completion for files, git branches, and command history.
Session Explorer Browse all past conversations for an agent. Star, rename, search, and resume any previous session.
Auto Run Automated task runner that processes markdown checklists. Spawns fresh AI sessions per task.
Playbook A saved Auto Run configuration with document order, options, and settings for repeatable batch workflows.
History Timestamped log of all actions (user commands, AI responses, Auto Run completions) with session links.
Remote Control Web interface for mobile access. Local network or remote via Cloudflare tunnel.
CLI Headless command-line tool for scripting, automation, and CI/CD integration.

UI Overview

Maestro features a three-panel layout:

  • Left Panel - Agent list with grouping, filtering, search, bookmarks, and drag-and-drop organization
  • Main Panel - Center workspace with two modes per agent:
      • AI Terminal - Converse with your AI agent (Claude Code, Codex, or OpenCode). Supports multiple tabs/sessions, @ file mentions, image attachments, slash commands, and draft auto-save.
      • Command Terminal - PTY shell with tab completion for files, branches, tags, and command history.
      • Views: Session Explorer, File Preview, Git Diffs, Git Logs
  • Right Panel - Three tabs: File Explorer, History Viewer, and Auto Run

Agent Status Indicators

Each session shows a color-coded status indicator:

  • 🟢 Green - Ready and waiting
  • 🟡 Yellow - Agent is thinking
  • 🔴 Red - No connection with agent
  • 🟠 Pulsing Orange - Attempting to establish connection
  • 🔴 Red Badge - Unread messages (small red dot overlapping top-right of status indicator, iPhone-style)

Screenshots

All of these screenshots were captured in the "Pedurple" theme. For screenshots of other themes, see THEMES.md. Also note that these screenshots may be dated, as the project is evolving rapidly.

Main Screen

image

Command Interpreter (with collapsed left panel)

image

Git Logs and Diff Viewer

image
image

File Viewer

image

CMD+K and Shortcuts Galore

image
image
image
image

Themes and Achievements

image
image

Session Tracking, Starring, Labeling, and Recall

image

Auto Runs with Change History Tracking

image
image

Group Chat

image

Web Interface / Remote Control

Chat

IMG_0163

Groups / Sessions

IMG_0162

History

IMG_0164

Keyboard Shortcuts

Global Shortcuts

Action macOS Windows/Linux
Quick Actions Cmd+K Ctrl+K
Toggle Sidebar Cmd+B Ctrl+B
Toggle Right Panel Cmd+\ Ctrl+\
New Agent Cmd+N Ctrl+N
Kill Agent Cmd+Shift+Backspace Ctrl+Shift+Backspace
Move Agent to Group Cmd+Shift+M Ctrl+Shift+M
Previous Agent Cmd+[ Ctrl+[
Next Agent Cmd+] Ctrl+]
Jump to Agent (1-9, 0=10th) Opt+Cmd+NUMBER Alt+Ctrl+NUMBER
Switch AI/Command Terminal Cmd+J Ctrl+J
Show Shortcuts Help Cmd+/ Ctrl+/
Open Settings Cmd+, Ctrl+,
View All Agent Sessions Cmd+Shift+L Ctrl+Shift+L
Jump to Bottom Cmd+Shift+J Ctrl+Shift+J
Cycle Focus Areas Tab Tab
Cycle Focus Backwards Shift+Tab Shift+Tab

Panel Shortcuts

Action macOS Windows/Linux
Go to Files Tab Cmd+Shift+F Ctrl+Shift+F
Go to History Tab Cmd+Shift+H Ctrl+Shift+H
Go to Auto Run Tab Cmd+Shift+1 Ctrl+Shift+1
Toggle Markdown Raw/Preview Cmd+E Ctrl+E
Insert Checkbox (Auto Run) Cmd+L Ctrl+L

Input & Output

Action Key
Send Message Enter or Cmd+Enter (configurable in Settings)
Multiline Input Shift+Enter
Navigate Command History Up Arrow while in input
Slash Commands Type / to open autocomplete
Focus Output Esc while in input
Focus Input Esc while in output
Open Output Search Cmd+F while in output
Scroll Output Up/Down Arrow while in output
Page Up/Down Alt+Up/Down Arrow while in output
Jump to Top/Bottom Cmd+Up/Down Arrow while in output

Tab Completion (Command Terminal)

The Command Terminal provides intelligent tab completion for faster command entry:

Action Key
Open Tab Completion Tab (when there's input text)
Navigate Suggestions Up/Down Arrow
Select Suggestion Enter
Cycle Filter Types Tab (while dropdown is open, git repos only)
Cycle Filter Backwards Shift+Tab (while dropdown is open)
Close Dropdown Esc

Completion Sources:
- History - Previous shell commands from your session
- Files/Folders - Files and directories in your current working directory
- Git Branches - Local and remote branches (git repos only)
- Git Tags - Available tags (git repos only)

In git repositories, filter buttons appear in the dropdown header allowing you to filter by type (All, History, Branches, Tags, Files). Use Tab/Shift+Tab to cycle through filters or click directly.

@ File Mentions (AI Terminal)

In AI mode, use @ to reference files in your prompts:

Action Key
Open File Picker Type @ followed by a search term
Navigate Suggestions Up/Down Arrow
Select File Tab or Enter
Close Dropdown Esc

Example: Type @readme to see matching files, then select to insert the file reference into your prompt. The AI will have context about the referenced file.

Sidebar & File Tree Navigation

Action Key
Navigate Agents Up/Down Arrow while in sidebar
Select Agent Enter while in sidebar
Open Session Filter Cmd+F while in sidebar
Navigate Files Up/Down Arrow while in file tree
Open File Tree Filter Cmd+F while in file tree
Open File Preview Enter on selected file
Close Preview/Filter/Modal Esc

File Preview

Action macOS Windows/Linux
Copy File Path Cmd+P Ctrl+P
Open Search Cmd+F Ctrl+F
Scroll Up/Down Arrow Up/Down Arrow
Close Esc Esc

Most shortcuts are customizable in Settings > Shortcuts

Keyboard Mastery

Maestro tracks your keyboard shortcut usage and rewards you for becoming a power user! As you use more shortcuts, you'll level up through the mastery ranks:

Level Threshold Name Description
0 0% Beginner Just starting out
1 25% Student Learning the basics
2 50% Performer Getting comfortable
3 75% Virtuoso Almost there
4 100% Keyboard Maestro Complete mastery

When you reach a new level, you'll see a celebration with confetti! Your progress is tracked in the Shortcuts Help modal (Cmd+/ or Ctrl+/), which shows your current mastery percentage and hints at shortcuts you haven't tried yet.

Why keyboard shortcuts matter: Using shortcuts keeps you in flow state, reduces context switching, and dramatically speeds up your workflow. Maestro is designed for keyboard-first operation—the less you reach for the mouse, the faster you'll work.

Slash Commands

Maestro includes an extensible slash command system with autocomplete. Type / in the input area to open the autocomplete menu, use arrow keys to navigate, and press Tab or Enter to select.

Custom AI Commands

Create your own slash commands in Settings > Custom AI Commands. Each command has a trigger (e.g., /deploy) and a prompt that gets sent to the AI agent.

Commands support template variables that are automatically substituted at runtime:

Agent Variables

Variable Description
{{AGENT_NAME}} Agent name
{{AGENT_PATH}} Agent home directory path (full path to project)
{{AGENT_GROUP}} Agent's group name (if grouped)
{{AGENT_SESSION_ID}} Agent session ID (for conversation continuity)
{{TAB_NAME}} Custom tab name (alias: SESSION_NAME)
{{TOOL_TYPE}} Agent type (claude-code, codex, opencode)

Path Variables

Variable Description
{{CWD}} Current working directory
{{AUTORUN_FOLDER}} Auto Run documents folder path

Auto Run Variables

Variable Description
{{DOCUMENT_NAME}} Current Auto Run document name (without .md)
{{DOCUMENT_PATH}} Full path to current Auto Run document
{{LOOP_NUMBER}} Current loop iteration (starts at 1)

Date/Time Variables

Variable Description
{{DATE}} Current date (YYYY-MM-DD)
{{TIME}} Current time (HH:MM:SS)
{{DATETIME}} Full datetime (YYYY-MM-DD HH:MM:SS)
{{TIMESTAMP}} Unix timestamp in milliseconds
{{DATE_SHORT}} Short date (MM/DD/YY)
{{TIME_SHORT}} Short time (HH:MM)
{{YEAR}} Current year (YYYY)
{{MONTH}} Current month (01-12)
{{DAY}} Current day (01-31)
{{WEEKDAY}} Day of week (Monday, Tuesday, etc.)

Git & Context Variables

Variable Description
{{GIT_BRANCH}} Current git branch name (requires git repo)
{{IS_GIT_REPO}} "true" or "false"
{{CONTEXT_USAGE}} Current context window usage percentage

Example: A custom /standup command with prompt:

It's {{WEEKDAY}}, {{DATE}}. I'm on branch {{GIT_BRANCH}} at {{AGENT_PATH}}.
Summarize what I worked on yesterday and suggest priorities for today.
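
At runtime the template variables are substituted before the prompt reaches the agent. With hypothetical values (a Tuesday run on a feature branch), the agent would receive something like:

It's Tuesday, 2025-12-23. I'm on branch feature/auth at /Users/you/Projects/MyApp.
Summarize what I worked on yesterday and suggest priorities for today.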

Git Worktrees

Git worktrees enable true parallel development by letting you run multiple AI agents on separate branches simultaneously. Each worktree operates in its own isolated directory, so there's no risk of conflicts between parallel work streams.
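
Under the hood this builds on git's own worktree support. As a rough manual equivalent of what a worktree sub-agent gets (the paths and branch names below are examples only):

# create an isolated working directory on a new branch
git worktree add ~/worktrees/feature-auth -b feature-auth

# list the worktrees attached to this repository
git worktree list

# remove the worktree once the branch is merged
git worktree remove ~/worktrees/feature-auth

Maestro automates this setup when you create worktree sub-agents from the UI.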

Creating a Worktree Sub-Agent

  1. In the session list, hover over an agent in a git repository
  2. Click the git branch indicator (shows current branch name)
  3. In the overlay menu, click "Create Worktree Sub-Agent"
  4. Configure the worktree:
      • Worktree Directory — Base folder where worktrees are created
      • Branch Name — Name for the new branch (becomes the subdirectory name)
      • Create PR on Completion — Auto-open a pull request when done
      • Target Branch — Base branch for the PR (defaults to main/master)

How Worktree Sessions Work

  • Nested Display — Worktree sub-agents appear indented under their parent session in the left sidebar
  • Branch Icon — A git branch icon indicates worktree sessions
  • Collapse/Expand — Click the chevron on a parent session to show/hide its worktree children
  • Independent Operation — Each worktree session has its own working directory, conversation history, and state

Creating Pull Requests

When you're done with work in a worktree:

  1. Right-click the worktree session → Create Pull Request, or
  2. Press Cmd+K with the worktree active → search "Create Pull Request"

The PR modal shows:
- Source branch (your worktree branch)
- Target branch (configurable)
- Auto-generated title and description based on your work

Requirements: GitHub CLI (gh) must be installed and authenticated. Maestro will detect if it's missing and show installation instructions.

Use Cases

Scenario How Worktrees Help
Background Auto Run Run Auto Run in a worktree while working interactively in the main repo
Feature Branches Spin up a sub-agent for each feature branch
Code Review Create a worktree to review and iterate on a PR without switching branches
Parallel Experiments Try different approaches simultaneously without git stash/pop

Tips

  • Name branches descriptively — The branch name becomes the worktree directory name
  • Use a dedicated worktree folder — Keep all worktrees in one place (e.g., ~/worktrees/)
  • Clean up when done — Delete worktree sessions after merging PRs to avoid clutter

Auto Run

Auto Run is a file-system-based document runner that lets you batch-process tasks using AI agents. Select a folder containing markdown documents with task checkboxes, and Maestro will work through them one by one, spawning a fresh AI session for each task.

Setting Up Auto Run

  1. Navigate to the Auto Run tab in the right panel (Cmd+Shift+1)
  2. Select a folder containing your markdown task documents
  3. Each .md file becomes a selectable document

Creating Tasks

Use markdown checkboxes in your documents:

# Feature Implementation Plan

- [ ] Implement user authentication
- [ ] Add unit tests for the login flow
- [ ] Update API documentation

Tip: Press Cmd+L (Mac) or Ctrl+L (Windows/Linux) to quickly insert a new checkbox at your cursor position.

Running Single Documents

  1. Select a document from the dropdown
  2. Click the Run button (or the ▶ icon)
  3. Customize the agent prompt if needed, then click Go

Multi-Document Batch Runs

Auto Run supports running multiple documents in sequence:

  1. Click Run to open the Batch Runner Modal
  2. Click + Add Docs to add more documents to the queue
  3. Drag to reorder documents as needed
  4. Configure options per document:
      • Reset on Completion - Uncheck all boxes when document completes (for repeatable tasks)
      • Duplicate - Add the same document multiple times
  5. Enable Loop Mode to cycle back to the first document after completing the last
  6. Click Go to start the batch run

Playbooks

Save your batch configurations for reuse:

  1. Configure your documents, order, and options
  2. Click Save as Playbook and enter a name
  3. Load saved playbooks from the Load Playbook dropdown
  4. Update or discard changes to loaded playbooks

Progress Tracking

The runner will:
- Process tasks serially from top to bottom
- Skip documents with no unchecked tasks
- Show progress: "Document X of Y" and "Task X of Y"
- Mark tasks as complete (- [x]) when done
- Log each completion to the History panel

Session Isolation

Each task executes in a completely fresh AI session with its own unique session ID. This provides:

  • Clean context - No conversation history bleeding between tasks
  • Predictable behavior - Tasks in looping playbooks execute identically each iteration
  • Independent execution - The agent approaches each task without memory of previous work

This isolation is critical for playbooks with Reset on Completion documents that loop indefinitely. Without it, the AI might "remember" completing a task and skip re-execution on subsequent loops.

Environment Variables

Maestro sets environment variables that your agent hooks can use to customize behavior:

Variable Value Description
MAESTRO_SESSION_RESUMED 1 Set when resuming an existing session (not set for new sessions)

Example: Conditional Hook Execution

Since Maestro spawns a new agent process for each message (batch mode), agent "session start" hooks will run on every turn. Use MAESTRO_SESSION_RESUMED to skip hooks on resumed sessions:

# In your agent's session start hook
[ "$MAESTRO_SESSION_RESUMED" = "1" ] && exit 0
# ... rest of your hook logic for new sessions only

This works with any agent provider (Claude Code, Codex, OpenCode) since the environment variable is set by Maestro before spawning the agent process.

History & Tracking

Each completed task is logged to the History panel with:
- AUTO label indicating automated execution
- Session ID pill (clickable to jump to that AI conversation)
- Summary of what the agent accomplished
- Full response viewable by clicking the entry

Keyboard navigation in History:
- Up/Down Arrow - Navigate entries
- Enter - View full response
- Esc - Close detail view and return to list

Auto-Save

Documents auto-save after 5 seconds of inactivity, and immediately when switching documents. Full undo/redo support with Cmd+Z / Cmd+Shift+Z.

Image Support

Paste images directly into your documents. Images are saved to an images/ subfolder with relative paths for portability.
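
For example, a pasted image is referenced with a relative path such as (the file name here is hypothetical):

![pasted screenshot](images/pasted-screenshot-01.png)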

Stopping the Runner

Click the Stop button at any time. The runner will:
- Complete the current task before stopping
- Preserve all completed work
- Allow you to resume later by clicking Run again

Parallel Auto Runs

Auto Run can execute in parallel across different agents without conflicts—each agent works in its own project directory, so there's no risk of clobbering each other's work.

Same project, parallel work: To run multiple Auto Runs in the same repository simultaneously, create worktree sub-agents from the git branch menu (see Git Worktrees). Each worktree operates in an isolated directory with its own branch, enabling true parallel task execution on the same codebase.

Group Chat

Group Chat lets you coordinate multiple AI agents in a single conversation. A moderator AI orchestrates the discussion, routing questions to the right agents and synthesizing their responses.

When to Use Group Chat

  • Cross-project questions: "How does the frontend authentication relate to the backend API?"
  • Architecture discussions: Get perspectives from agents with different codebase contexts
  • Comparative analysis: "Compare the testing approach in these three repositories"
  • Knowledge synthesis: Combine expertise from specialized agents

How It Works

  1. Create a Group Chat from the sidebar menu
  2. Add participants by @mentioning agent names (e.g., @Frontend, @Backend)
  3. Send your question - the moderator receives it first
  4. Moderator coordinates - routes to relevant agents via @mentions
  5. Agents respond - each agent works in their own project context
  6. Moderator synthesizes - combines responses into a coherent answer

The Moderator's Role

The moderator is an AI that controls the conversation flow:

  • Direct answers: For simple questions, the moderator responds directly
  • Delegation: For complex questions, @mentions the appropriate agents
  • Follow-up: If agent responses are incomplete, keeps asking until satisfied
  • Synthesis: Combines multiple agent perspectives into a final answer

The moderator won't return to you until your question is properly answered—it will keep going back to agents as many times as needed.

Example Conversation

You: "How does @Maestro relate to @RunMaestro.ai?"

Moderator: "Let me gather information from both projects.
            @Maestro @RunMaestro.ai - please explain your role in the ecosystem."

[Agents work in parallel...]

Maestro: "I'm the core Electron desktop app for AI orchestration..."

RunMaestro.ai: "I'm the marketing website and leaderboard..."

Moderator: "Here's how they relate:
            - Maestro is the desktop app (the product)
            - RunMaestro.ai is the website (discovery and community)
            - They share theme definitions for visual consistency

            Next steps: Would you like details on any specific integration?"

Tips for Effective Group Chats

  • Name agents descriptively - Agent names appear in the chat, so "Frontend-React" is clearer than "Agent1"
  • Be specific in questions - The more context you provide, the better the moderator can route
  • @mention explicitly - You can direct questions to specific agents: "What does @Backend think?"
  • Let the moderator work - It may take multiple rounds for complex questions

Achievements

Maestro features a conductor-themed achievement system that tracks your cumulative Auto Run time. The focus is simple: longest run wins. As you accumulate Auto Run hours, you level up through 11 ranks inspired by the hierarchy of orchestral conductors.

Conductor Ranks

Level Rank Time Required Example Conductor
1 Apprentice Conductor 15 minutes Gustavo Dudamel (early career)
2 Assistant Conductor 1 hour Marin Alsop
3 Associate Conductor 8 hours Yannick Nézet-Séguin
4 Resident Conductor 24 hours Jaap van Zweden
5 Principal Guest Conductor 1 week Esa-Pekka Salonen
6 Chief Conductor 30 days Andris Nelsons
7 Music Director 3 months Sir Simon Rattle
8 Maestro Emeritus 6 months Bernard Haitink
9 World Maestro 1 year Kirill Petrenko
10 Grand Maestro 5 years Riccardo Muti
11 Titan of the Baton 10 years Leonard Bernstein

Reaching the Top

Since Auto Runs can execute in parallel across multiple Maestro sessions, achieving Titan of the Baton (Level 11) is technically feasible in less than 10 calendar years. Run 10 agents simultaneously with worktrees and you could theoretically hit that milestone in about a year of real time.

But let's be real—getting to Level 11 is going to take some serious hacking. You'll need a well-orchestrated fleet of agents running around the clock, carefully crafted playbooks that loop indefinitely, and the infrastructure to keep it all humming. It's the ultimate test of your Maestro skills.

The achievement panel shows your current rank, progress to the next level, and total accumulated time. Each rank includes flavor text and information about a legendary conductor who exemplifies that level of mastery.

Context Management

Context management lets you combine or transfer conversation history between sessions and agents, enabling powerful workflows where you can:

  • Compact & continue — Compress your context to stay within token limits while preserving key information
  • Merge sessions — Combine context from multiple conversations into one
  • Transfer to other agents — Send your context to a different AI agent (e.g., Claude Code → Codex)

Compact & Continue

When your conversation approaches context limits, you can compress it while preserving essential information:

  1. Right-click a tab → "Context: Compact", or use Command Palette (Cmd+K / Ctrl+K) → "Context: Compact"
  2. The AI compacts the conversation, extracting key decisions, code changes, and context
  3. A new tab opens with the compressed context, ready to continue working

When to use:
- The context warning sash appears (yellow at 60%, red at 80% usage)
- You want to continue a long conversation without losing important context
- You need to free up context space for new tasks

What gets preserved:
- Key decisions and their rationale
- Code changes and file modifications
- Important technical details and constraints
- Current task state and next steps

Merging Sessions

Combine context from multiple sessions or tabs into one:

  1. Right-click a tab → "Merge With...", or use Command Palette (Cmd+K / Ctrl+K) → "Merge with another session"
  2. Search for or select the target session/tab
  3. Review the merge preview showing estimated token count
  4. Click "Merge Contexts"

The merged context creates a new tab in the target session with conversation history from both sources. Use this to consolidate related conversations or bring context from an older session into a current one.

What gets merged:
- Full conversation history (user messages and AI responses)
- Token estimates are shown before merge to help you stay within context limits

Tips:
- You can merge tabs within the same session or across different sessions
- Large merges (100k+ tokens) will show a warning but still proceed
- Self-merge (same tab to itself) is prevented

Sending to Another Agent

Transfer your context to a different AI agent:

  1. Right-click a tab → "Send to Agent...", or use Command Palette (Cmd+K / Ctrl+K) → "Send to another agent"
  2. Select the target agent (only available/installed agents are shown)
  3. Optionally enable context grooming to optimize the context for the target agent
  4. A new session opens with the transferred context

Context Grooming:
When transferring between different agent types, the context can be automatically "groomed" to:
- Remove agent-specific artifacts and formatting
- Condense verbose output while preserving key information
- Optimize for the target agent's capabilities

Grooming is enabled by default but can be skipped for faster transfers.

Use Cases:
- Start a task in Claude Code, then hand off to Codex for a different perspective
- Transfer a debugging session to an agent with different tool access
- Move context to an agent pointing at a different project directory

Command Line Interface

Maestro includes a CLI tool (maestro-cli) for managing agents and running playbooks from the command line, cron jobs, or CI/CD pipelines. The CLI requires Node.js (which you already have if you're using Claude Code).

Installation

The CLI is bundled with Maestro as a JavaScript file. Create a shell wrapper to run it:

# macOS (after installing Maestro.app)
printf '#!/bin/bash\nnode "/Applications/Maestro.app/Contents/Resources/maestro-cli.js" "$@"\n' | sudo tee /usr/local/bin/maestro-cli && sudo chmod +x /usr/local/bin/maestro-cli

# Linux (deb/rpm installs to /opt)
printf '#!/bin/bash\nnode "/opt/Maestro/resources/maestro-cli.js" "$@"\n' | sudo tee /usr/local/bin/maestro-cli && sudo chmod +x /usr/local/bin/maestro-cli

# Windows (PowerShell as Administrator) - create a batch file
@"
@echo off
node "%ProgramFiles%\Maestro\resources\maestro-cli.js" %*
"@ | Out-File -FilePath "$env:ProgramFiles\Maestro\maestro-cli.cmd" -Encoding ASCII

Alternatively, run directly with Node.js:

node "/Applications/Maestro.app/Contents/Resources/maestro-cli.js" list groups

Usage

# List all groups
maestro-cli list groups

# List all agents
maestro-cli list agents
maestro-cli list agents --group <group-id>

# Show agent details (history, usage stats, cost)
maestro-cli show agent <agent-id>

# List all playbooks (or filter by agent)
maestro-cli list playbooks
maestro-cli list playbooks --agent <agent-id>

# Show playbook details
maestro-cli show playbook <playbook-id>

# Run a playbook
maestro-cli playbook <playbook-id>

# Dry run (shows what would be executed)
maestro-cli playbook <playbook-id> --dry-run

# Run without writing to history
maestro-cli playbook <playbook-id> --no-history

# Wait for agent if busy, with verbose output
maestro-cli playbook <playbook-id> --wait --verbose

# Debug mode for troubleshooting
maestro-cli playbook <playbook-id> --debug

JSON Output

By default, commands output human-readable formatted text. Use --json for machine-parseable JSONL output:

# Human-readable output (default)
maestro-cli list groups
GROUPS (2)

  🎨  Frontend
      group-abc123
  ⚙️  Backend
      group-def456

# JSON output for scripting
maestro-cli list groups --json
{"type":"group","id":"group-abc123","name":"Frontend","emoji":"🎨","timestamp":...}
{"type":"group","id":"group-def456","name":"Backend","emoji":"⚙️","timestamp":...}

# Running a playbook with JSON streams events
maestro-cli playbook <playbook-id> --json
{"type":"start","timestamp":...,"playbook":{...}}
{"type":"document_start","timestamp":...,"document":"tasks.md","taskCount":5}
{"type":"task_start","timestamp":...,"taskIndex":0}
{"type":"task_complete","timestamp":...,"success":true,"summary":"...","elapsedMs":8000}
{"type":"document_complete","timestamp":...,"tasksCompleted":5}
{"type":"complete","timestamp":...,"totalTasksCompleted":5,"totalElapsedMs":60000}

Scheduling with Cron

# Run a playbook every hour (use --json for log parsing)
0 * * * * /usr/local/bin/maestro-cli playbook <playbook-id> --json >> /var/log/maestro.jsonl 2>&1

Requirements

  • At least one AI agent CLI must be installed and in PATH (Claude Code, Codex, or OpenCode)
  • Maestro config files must exist (created automatically when you use the GUI)

Provider Nuances

Each AI agent has unique capabilities and limitations. Maestro adapts its UI based on what each provider supports.

Claude Code

Feature Support
Image attachments ✅ New and resumed sessions
Session resume --resume flag
Read-only mode --permission-mode plan
Slash commands /help, /compact, etc.
Cost tracking ✅ Full cost breakdown
Model selection --model flag (via custom CLI args)

OpenAI Codex

Feature Support
Image attachments ⚠️ New sessions only (not on resume)
Session resume exec resume <id>
Read-only mode --sandbox read-only
Slash commands ⚠️ Interactive TUI only (not in exec mode)
Cost tracking ❌ Token counts only (no pricing)
Model selection -m, --model flag

Notes:
- Codex's resume subcommand doesn't accept the -i/--image flag. Images can only be attached when starting a new session. Maestro hides the attach image button when resuming Codex sessions.
- Codex has slash commands (/compact, /undo, /diff, etc.) but they only work in interactive TUI mode, not in exec mode which Maestro uses.

OpenCode

Feature Support
Image attachments ✅ New and resumed sessions
Session resume --session flag
Read-only mode --agent plan
Slash commands ❌ Not investigated
Cost tracking ✅ Per-step costs
Model selection --model provider/model

Note: OpenCode uses the run subcommand which auto-approves all permissions (similar to Codex's YOLO mode).
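
As a rough illustration of how these differences show up on the command line, using only the flags listed above (the exact invocations Maestro constructs may differ):

# Claude Code: resume a session in read-only planning mode
claude --resume <session-id> --permission-mode plan

# Codex: resume a session with a read-only sandbox
codex exec resume <session-id> --sandbox read-only

# OpenCode: continue a session with an explicit model
opencode run --session <session-id> --model <provider/model>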

Configuration

Settings are stored in:

  • macOS: ~/Library/Application Support/maestro/
  • Windows: %APPDATA%/maestro/
  • Linux: ~/.config/maestro/

Cross-Device Sync (Beta)

Maestro can sync settings, sessions, and groups across multiple devices by storing them in a cloud-synced folder like iCloud Drive, Dropbox, or OneDrive.

Setup:

  1. Open Settings (Cmd+,) → General tab
  2. Scroll to Storage Location
  3. Click Choose Folder... and select a synced folder:
      • iCloud Drive: ~/Library/Mobile Documents/com~apple~CloudDocs/Maestro
      • Dropbox: ~/Dropbox/Maestro
      • OneDrive: ~/OneDrive/Maestro
  4. Maestro will migrate your existing settings to the new location
  5. Restart Maestro for changes to take effect
  6. Repeat on your other devices, selecting the same synced folder

What syncs:
- Settings and preferences
- Session configurations
- Groups and organization
- Agent configurations
- Session origins and metadata

What stays local:
- Window size and position (device-specific)
- The bootstrap file that points to your sync location

Important limitations:
- Single-device usage: Only run Maestro on one device at a time. Running simultaneously on multiple devices can cause sync conflicts where the last write wins.
- No conflict resolution: If settings are modified on two devices before syncing completes, one set of changes will be lost.
- Restart required: Changes to storage location require an app restart to take effect.

To reset to the default location, click Use Default in the Storage Location settings.

Remote Access

Maestro includes a built-in web server for mobile remote control:

  1. Automatic Security: Web server runs on a random port with an auto-generated security token embedded in the URL
  2. QR Code Access: Scan a QR code to connect instantly from your phone
  3. Global Access: All sessions are accessible when the web interface is enabled - the security token protects access
  4. Remote Tunneling: Access Maestro from anywhere via Cloudflare tunnel (requires cloudflared CLI)

Mobile Web Interface

The mobile web interface provides:
- Real-time session monitoring and command input
- Device color scheme preference support (light/dark mode)
- Connection status indicator with automatic reconnection
- Offline queue for commands typed while disconnected
- Swipe gestures for common actions
- Quick actions menu for the send button

Local Access (Same Network)

  1. Click the "OFFLINE" button in the header to enable the web interface
  2. The button changes to "LIVE" and shows a QR code overlay
  3. Scan the QR code or copy the secure URL to access from your phone on the same network

Remote Access (Outside Your Network)

To access Maestro from outside your local network (e.g., on mobile data or from another location):

  1. Install cloudflared: brew install cloudflared (macOS) or download for other platforms
  2. Enable the web interface (OFFLINE → LIVE)
  3. Toggle "Remote Access" in the Live overlay
  4. A secure Cloudflare tunnel URL will be generated
  5. Use the Local/Remote pill selector to switch between QR codes
  6. The tunnel stays active as long as Maestro is running - no time limits, no account required

Troubleshooting & Support

Debug Package

If you encounter deep-seated issues that are difficult to diagnose, Maestro can generate a Debug Package—a compressed bundle of diagnostic information that you can safely share when reporting bugs.

To create a Debug Package:
1. Press Cmd+K (Mac) or Ctrl+K (Windows/Linux) to open Quick Actions
2. Search for "Create Debug Package"
3. Choose a save location for the .zip file
4. Attach the file to your GitHub issue

What's Included

The debug package collects metadata and configuration—never your conversations or sensitive data:

File Contents
system-info.json OS, CPU, memory, Electron/Node versions, app uptime
settings.json App preferences with sensitive values redacted
agents.json Agent configurations, availability, and capability flags
external-tools.json Shell, git, GitHub CLI, and cloudflared availability
sessions.json Session metadata (names, states, tab counts—no conversations)
processes.json Active process information
logs.json Recent system log entries
errors.json Current error states and recent error events
storage-info.json Storage paths and sizes

Privacy Protections

The debug package is designed to be safe to share publicly:

  • API keys and tokens — Replaced with [REDACTED]
  • Passwords and secrets — Never included
  • Conversation content — Excluded entirely (no AI responses, no user messages)
  • File contents — Not included from your projects
  • Custom prompts — Not included (may contain sensitive context)
  • File paths — Sanitized to replace your username with ~
  • Environment variables — Only counts shown, not values (may contain secrets)
  • Custom agent arguments — Only [SET] or [NOT SET] shown, not actual values

Example path sanitization:
- Before: /Users/johndoe/Projects/MyApp
- After: ~/Projects/MyApp

Getting Help

Contributing

See CONTRIBUTING.md for development setup, architecture details, and contribution guidelines.

Contributors

License

AGPL-3.0 License