Nanocoder v1.26.0

What's Changed

  • BREAKING: Removed redundant nanocoderTools.alwaysAllow setting. It duplicated the top-level alwaysAllow with no extra behaviour. Move any entries from nanocoderTools.alwaysAllow to the top-level alwaysAllow array in agents.config.json. The deprecated agents.config.example.json has also been removed.
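
For example, entries previously under the removed key now live at the top level of agents.config.json (the tool names below are illustrative):

```json
{
  "alwaysAllow": ["read_file", "list_directory"]
}
```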

  • Added nano mode — a third tool profile under /tune designed for the smallest open-weights models or low-end hardware running larger models locally. Strictly more aggressive than minimal: drops find_files, list_directory, and agent; trims CORE PRINCIPLES and CODING PRACTICES; uses ≤4-line TASK APPROACH, FILE OPERATIONS, and CONSTRAINTS sections; replaces the verbose SYSTEM INFORMATION block with a single-line ## SYSTEM line; and omits AGENTS.md from the prompt by default. Brings the system prompt from ~500–700 tokens (minimal) down to ~150–250 tokens. Includes a new "Nano (low-end hardware)" preset and an Include AGENTS.md toggle.

  • Added reasoning trace support. Reasoning content from models (Codex GPT-5, DeepSeek-R1-style, Anthropic extended thinking, etc.) now streams in real time, renders as a collapsible Thought block above the response, persists across history, and is included in logs. Toggle expansion with Ctrl+R, configure default expansion via the new Display Settings panel, and pin per-session via the tune expandedReasoning option. <think> tags are now stripped on the native tool-calling path so reasoning never leaks into rendered output. Thanks to @Daniel5055. Closes [#457].

  • Added refined non-interactive mode. A new --plain flag streams output suitable for CI pipelines, scripts, and pipes — no Ink rendering, no interactive prompts, deterministic exit codes, and proper handling of stdin/stdout. Includes a dedicated non-interactive-shell component and full test coverage for the plain transport.
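
The notes don't show the exact invocation shape, but a typical pipe-friendly use might look like this (delivering the prompt via stdin is an assumption):

```bash
# One-shot run in plain mode for CI: no Ink UI, plain text on stdout
echo "List TODO comments in src/" | nanocoder --plain > todos.txt
echo $?   # deterministic exit code for scripting
```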

  • Reworked VS Code extension integration. Removed the "Ask Nanocoder" command in favour of a more natural flow: highlighted text and the currently focused file are automatically pulled into Nanocoder as context. Backed by a rewritten extension protocol, a simplified extension entrypoint, a new useVSCodeServer hook, and tighter UI hooks for the development mode indicator and chat input.

  • Added /rename command for renaming the current chat session. Accessible from the chat input and reflected in the development mode indicator. Thanks to @lordoski.

  • Added defaultMode config option for interactive sessions. Set a starting mode (normal, auto-accept, yolo, plan) in agents.config.json instead of always launching in normal mode. Thanks to @lordoski.
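
Assuming defaultMode sits at the top level of agents.config.json (the exact placement isn't shown in the notes), a config that starts every session in auto-accept mode might look like:

```json
{
  "defaultMode": "auto-accept"
}
```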

  • Added custom system prompt support via agents.config.json. Define a project-level system prompt that replaces or augments the built-in prompt sections, with full validation and prompt-builder integration. Closes [#487].

  • Added per-provider and per-model context window overrides in agents.config.json via contextWindow and contextWindows[model]. New resolution order: session override (/context-max, --context-max) → provider model override → provider default → NANOCODER_CONTEXT_LIMIT → models.dev / Ollama fallback. /usage, status bar, auto-compact, and context reporting all use the active provider config consistently. Closes [#455].
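
A hedged sketch of the new overrides, assuming they sit on a provider entry in agents.config.json — only the contextWindow and contextWindows[model] keys are confirmed by the notes; the surrounding provider fields and model name are illustrative:

```json
{
  "providers": [
    {
      "name": "ollama",
      "contextWindow": 32768,
      "contextWindows": {
        "qwen3:8b": 16384
      }
    }
  ]
}
```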

  • Added subagent context window overrides so delegated agents can run with a different context limit than the main session. Thanks to @zerone0x.

  • Added disabledTools config option in agents.config.json to disable specific built-in tools project-wide. Honoured by both the main agent and subagents.
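
For instance, to switch off specific built-in tools for a project (tool names here are illustrative):

```json
{
  "disabledTools": ["git_pr", "agent"]
}
```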

  • Added --trust-directory CLI flag to bypass the first-run directory trust prompt for automation. Yolo mode now also skips file path validators in line with its "auto-accept everything" semantics.
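
In an automation context this might be used as follows (only the flag itself is confirmed by the notes):

```bash
# Bypass the first-run directory trust prompt in CI or scripts
nanocoder --trust-directory
```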

  • Added JSON tool fallback mode for non-tool-calling open-weights models (Qwen, Kimi, GLM). The tune menu now cycles through Native ON / OFF (XML) / OFF (JSON), with formatToolsForJSONPrompt() embedding literal JSON Schema and a JSON branch in the tool-call parser. Native path also gains JSON/XML hallucination recovery: parseToolCalls() is run against text content when the SDK reports no tool_calls, and stripEmbeddedToolCallText() removes echoed tool-call text so Ghost Echo no longer leaks into the UI or history. Reference [#500].

  • Added <function=...> format to the tool-call parser for models that emit OpenAI-style function tags.
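
As a rough illustration of what such a parser branch might do — the function and type names below are hypothetical, not Nanocoder's actual API, and a JSON body between the tags is an assumption:

```typescript
// Hypothetical sketch: extract OpenAI-style <function=NAME>...</function> tags
// that some models emit as plain text instead of native tool calls.
interface ParsedCall {
  name: string;
  args: Record<string, unknown>;
}

function parseFunctionTags(text: string): ParsedCall[] {
  const calls: ParsedCall[] = [];
  // Match <function=tool_name>{ ...JSON args... }</function>, non-greedily.
  const re = /<function=([\w.-]+)>([\s\S]*?)<\/function>/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(text)) !== null) {
    try {
      calls.push({ name: m[1], args: JSON.parse(m[2]) });
    } catch {
      // Malformed JSON args: skip this tag rather than fail the whole response.
    }
  }
  return calls;
}
```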

  • Added Display Settings panel under /settings ("Tool Results and Thinking") with two new toggles: Show Thinking by default (Ctrl+R) and Expand Tool Results by default (Ctrl+O), persisted to nanocoder-preferences.json. Thanks to @cleyesode. Closes [#499].

  • Added 12+ new themes to the bundled theme set.

  • Removed the in-repo release content generator workflow. Release content is now generated from another repo via ContentForest.

  • Bumped CI and devcontainer toolchain to Node.js 22 + pnpm 11 across pr-checks.yml, release.yml, update-badges.yml, and .devcontainer/Dockerfile. engines.node raised to >=22, with a packageManager field (pnpm@11.x.x) added to package.json so Corepack pins the pnpm version automatically for both contributors and CI — no more drift, no per-workflow version: lines. CONTRIBUTING.md / docs/getting-started/installation.md / .devcontainer/README.md updated to match. Fixes the node:sqlite ERR_UNKNOWN_BUILTIN_MODULE crash hit by pnpm 11 on Node 20 runners.
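
The relevant package.json fields might look like this — the pnpm patch version is whatever the repo actually pins (the notes say pnpm@11.x.x), so 11.0.0 below is a placeholder:

```json
{
  "engines": { "node": ">=22" },
  "packageManager": "pnpm@11.0.0"
}
```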

  • Migrated pnpm config from package.json to pnpm-workspace.yaml for pnpm 11 compatibility: overrides, patchedDependencies, and the new allowBuilds block now live in the workspace file. While migrating, audited and removed all 16 existing version overrides — every one was redundant under current dependency resolution (each transitive resolved to a version already exceeding the override's floor), and removing them left pnpm audit output unchanged.
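
A minimal sketch of the migrated pnpm-workspace.yaml, using only the keys the notes name (all values are illustrative, and since every version override was removed during the audit, no overrides block is shown):

```yaml
# pnpm 11: these settings move out of package.json's "pnpm" section
packages:
  - "."
patchedDependencies:
  some-dep@1.0.0: patches/some-dep@1.0.0.patch
allowBuilds:
  - esbuild
```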

  • Major spring-clean refactor across the codebase. Extracted shared oauth-login command from the duplicated codex-login and copilot-login flows; introduced shared item-selector component used by model-selector and provider-selector; centralised tool-needs-approval logic; moved AI SDK error-handling specs into a dedicated subdirectory; tightened logging method factory; and trimmed redundant code paths in conversation-loop, subagent-executor, and the plain shell.

  • Refactored App.tsx and added missing test coverage for the app container and chat input components.

  • Refactored settings theme selector UX. Replaced the scrollable SelectInput with arrow-key navigation, real-time theme preview, and a "Theme Name [n/N]" position indicator. Preferences are only persisted on Enter.

  • Updated the system prompt to encourage subagent use more proactively for complex multi-step tasks.

  • Stripped leading and trailing whitespace from assistant messages so blank lines and stray newlines from the model no longer push content around in the UI.

  • Fix: Empty model responses now trigger a capped auto-continue instead of crashing or burning tokens forever. Empty-response retry notices are coalesced into a single live counter so the chat history stays readable.

  • Fix: Surface context-size failures explicitly instead of silently retrying empty responses. Thanks to @zerone0x. Closes [#501].

  • Fix: Persist compressed messages across recursive conversation turns. Auto-compact now also resets the streaming token count after compression, with regression coverage to keep it stable. Thanks to @lordoski. Closes [#480].

  • Fix: Throw stream errors immediately during streaming instead of swallowing them and letting the response appear to end cleanly. Thanks to @alexbrjo.

  • Fix: Cap malformed-XML retries to prevent OOM when a model gets stuck emitting broken tool-call XML in a tight loop. Reference [#500].

  • Fix: Stale lastProvider after running the config wizard mid-session — the next /provider invocation picked up the old value. Includes regression coverage. Thanks to @electricwolfemarshmallowhypertext. Closes [#477].

  • Fix: Inaccurate tok/s reporting in the streaming status line.

  • Fix: File search, content search, and @mention autocomplete now work cross-platform. Replaced Unix-only find/grep/which shell-outs with pure Node.js implementations honouring .gitignore and DEFAULT_IGNORE_DIRS, restored the 30s search timeout via AbortSignal, and added Windows path normalisation. Thanks for the cross-platform report (#469).

  • Fix: Corrected the read file tool's description.

  • Fix: Type MCPTool.inputSchema as JSONSchema7 instead of any, with isPlainObject guard at the MCP protocol boundary so malformed server schemas resolve to undefined rather than flowing through the system as a trusted type. Thanks to @ragini-pandey. Closes [#467].

  • Fix: Apply caCertPath TLS configuration to Anthropic and Google providers (previously only OpenAI-compatible providers honoured it), with cert path validation. Companion fix for custom CA bundle support across all providers. Thanks to @zerone0x. Closes [#491] (#380).

  • Fix: Reject ambiguous provider names with case-insensitive duplicates (e.g. both Ollama and ollama defined). Provider names are now resolved case-insensitively throughout. Thanks to @zerone0x. Closes [#488].

  • Fix: Quoted custom command arguments (double, single, and backtick) are now parsed as a single parameter so multi-word values survive tokenisation. Closes [#478].

  • Fix: Missing margin on the live sub-agent progress component.

  • Fix: The git_pr tool no longer requires approval in yolo mode.

  • Fix: bash mode no longer passes the currently selected file when not relevant.

  • Fix: Incorrect passing of system prompt in some chat-handler paths.

  • Fix: Default missing subagent_type and description parameters on the agent tool to safe values to prevent a UI crash when the model omits them.

  • Fix: Model context limits failed to resolve for gemma4.

  • Fix: Scheduler — clear the perf buffer between jobs to avoid an undici fetch leak that accumulated across long-lived scheduler sessions. Refresh the base system prompt for each scheduler execution so dynamic system info (current date, etc.) stays fresh.

  • Fix: Reasoning trace follow-ups — restore post-loop flushBuffer safety net, persist reasoning on assistantMsg so it survives in history, default Codex reasoningSummary=auto and reasoningEffort=medium so GPT-5 reasoning actually renders, and resolve a system-prompt cache race where "No subagents available." was cached before the subagent loader finished.

  • Fix: Various test/CI breakages from merges and refactors.

  • Fix: Nix packaging — removed the unmaintained snowfall-lib dependency and improved overall packaging. Thanks to @PlasmaPower. Closes [#459].

  • Fix: Homebrew formula — switched the node dependency from node@20 to node@22 to prevent the webidl.util.markAsUncloneable is not a function startup crash. Thanks to @matthiasbolten. Closes [#468].

  • Fix: Repaired broken documentation references. Thanks to @breca.

  • Security: Address Semgrep findings flagged across the codebase.

  • Dependency updates: ai 6.0.116 → 6.0.174, @ai-sdk/openai 3.0.41 → 3.0.53, @ai-sdk/google 3.0.53 → 3.0.64, @ai-sdk/openai-compatible 2.0.35 → 2.0.41, undici 8.0.2 → 8.2.0, react 19.2.4 → 19.2.6, yaml 2.8.3 → 2.8.4, dotenv 17.3.1 → 17.4.2, @nanocollective/get-md 1.3.0 → 1.3.1, plus dev-dependency bumps for @biomejs/biome, knip, tsc-alias, @ava/typescript, @vscode/vsce, @types/vscode, eslint, and strip-ansi.

If there are any problems, feedback or thoughts please drop an issue or message us through Discord! Thank you for using Nanocoder.

Installation

```bash
npm install -g @nanocollective/nanocoder
```

Usage

```bash
nanocoder
```

Full Changelog: https://github.com/Nano-Collective/nanocoder/compare/v1.25.2...v1.26.0

Source: README.md, updated 2026-05-10