Best Artificial Intelligence Software for Linux - Page 18

Compare the Top Artificial Intelligence Software for Linux as of May 2026 - Page 18

  • 1
    Snippets AI

    Snippets AI is an AI-prompt and snippet-management platform where users can save, adapt, and reuse prompts and code snippets across multiple large language models from one centralized workspace. It provides keyboard shortcuts to insert prompts into any app without copy-and-paste, ensuring consistency and speed. Teams can collaborate in shared workspaces with version control, syntax highlighting, voice input, and public or private sharing of libraries, ensuring everyone stays aligned on content, templates, or code patterns. Snippets AI also offers developer-friendly REST APIs to programmatically manage prompts, code, workspaces, and integrations. Community features include public libraries of curated prompts and a “Share & Earn” model that pays creators for prompt views. Enterprise-grade security includes fine-grained permissions, audit logs, and dedicated policies for data protection.
    Starting Price: $5.99 per month
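The REST API mentioned above could be driven roughly as follows. This is a hedged sketch: the base URL, endpoint path, field names, and bearer-token header are hypothetical placeholders for illustration, not documented Snippets AI API details.

```python
import json

API_BASE = "https://api.example-snippets.test/v1"  # hypothetical base URL

def build_create_snippet_request(title, body, tags, token):
    """Assemble a (hypothetical) create-snippet request without sending it."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/snippets",          # assumed endpoint path
        "headers": {
            "Authorization": f"Bearer {token}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        "body": json.dumps({"title": title, "body": body, "tags": tags}),
    }

req = build_create_snippet_request(
    "summarize-pr", "Summarize this pull request: {diff}", ["review"], "TOKEN"
)
print(req["url"])
```

Sending the request with any HTTP client is then a one-liner; keeping the payload construction pure, as here, makes it easy to test.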
  • 2
    DeepSeek-V3.2
    DeepSeek-V3.2 is a next-generation open large language model designed for efficient reasoning, complex problem solving, and advanced agentic behavior. It introduces DeepSeek Sparse Attention (DSA), a long-context attention mechanism that dramatically reduces computation while preserving performance. The model is trained with a scalable reinforcement learning framework, allowing it to achieve results competitive with GPT-5 and even surpass it in its Speciale variant. DeepSeek-V3.2 also includes a large-scale agent task synthesis pipeline that generates structured reasoning and tool-use demonstrations for post-training. The model features an updated chat template with new tool-calling logic and an optional developer role for agent workflows. With gold-medal performance in the IMO 2025 and IOI 2025 competitions, DeepSeek-V3.2 demonstrates elite reasoning capabilities for both research and applied AI scenarios.
    Starting Price: Free
  • 3
    DeepSeek-V3.2-Speciale
    DeepSeek-V3.2-Speciale is a high-compute variant of the DeepSeek-V3.2 model, created specifically for deep reasoning and advanced problem-solving tasks. It builds on DeepSeek Sparse Attention (DSA), a custom long-context attention mechanism that reduces computational overhead while preserving high performance. Through a large-scale reinforcement learning framework and extensive post-training compute, the Speciale variant surpasses GPT-5 on reasoning benchmarks and matches the capabilities of Gemini-3.0-Pro. The model achieved gold-medal performance in the International Mathematical Olympiad (IMO) 2025 and International Olympiad in Informatics (IOI) 2025. DeepSeek-V3.2-Speciale does not support tool-calling, making it purely optimized for uninterrupted reasoning and analytical accuracy. Released under the MIT license, it provides researchers and developers an open, state-of-the-art model focused entirely on high-precision reasoning.
    Starting Price: Free
  • 4
    OpenAGI

    OpenAGI is a developer-focused framework designed to help teams build autonomous, human-like AI agents capable of planning, reasoning, and executing tasks independently. It bridges the gap between traditional LLM applications and fully autonomous agents by offering tools for decision-making, continual learning, and long-term task execution. The platform allows developers to create specialized agents for real-world use cases across industries such as education, finance, healthcare, and software development. With its flexible architecture, OpenAGI supports sequential, parallel, and dynamic communication patterns between agents. Developers can choose automated configuration generation or manually tailor every detail for complete customization. OpenAGI represents an early but significant step toward making powerful, adaptive agent technology accessible to everyone.
    Starting Price: Free
  • 5
    Lux

    OpenAGI Foundation

    Lux is a powerful computer-use AI platform that enables agents to operate software just like a human user—clicking, typing, navigating, and completing tasks across any interface. It offers three execution modes—Tasker, Actor, and Thinker—giving developers the ability to choose between step-by-step precision, near-instant task execution, or long-form reasoning for complex workflows. Lux can autonomously perform actions such as crawling Amazon data, running automated QA tests, or extracting insights from Nasdaq’s insider activity pages. The platform makes it possible to prototype and deploy real computer-use agents in as little as 20 minutes using developer-friendly SDKs and templates. Its agents are built to understand vague goals, execute long-running operations, and interact naturally with human-facing software instead of relying solely on APIs. Lux represents a new paradigm where AI goes beyond reasoning and content generation to directly operate computers at scale.
    Starting Price: Free
  • 6
    Devstral 2

    Mistral AI

    Devstral 2 is a next-generation, open source agentic AI model tailored for software engineering: it doesn’t just suggest code snippets, it understands and acts across entire codebases, enabling multi-file edits, bug fixes, refactoring, dependency resolution, and context-aware code generation. The Devstral 2 family includes a large 123-billion-parameter model as well as a smaller 24-billion-parameter variant (“Devstral Small 2”), giving teams flexibility; the larger model excels in heavy-duty coding tasks requiring deep context, while the smaller one can run on more modest hardware. With a vast context window of up to 256K tokens, Devstral 2 can reason across extensive repositories, track project history, and maintain a consistent understanding of lengthy files, an advantage for complex, real-world projects. The CLI tracks project metadata, Git statuses, and directory structure to give the model context, making “vibe-coding” more powerful.
    Starting Price: Free
  • 7
    Devstral Small 2
    Devstral Small 2 is the compact, 24-billion-parameter variant of the new coding-focused model family from Mistral AI, released under the permissive Apache 2.0 license to enable both local deployment and API use. Alongside its larger sibling (Devstral 2), this model brings “agentic coding” capabilities to environments with modest compute: it supports a large 256K-token context window, enabling it to understand and make changes across entire codebases. On the standard code-generation benchmark (SWE-Bench Verified), Devstral Small 2 scores around 68.0%, placing it among open-weight models many times its size. Because of its reduced size and efficient design, Devstral Small 2 can run on a single GPU or even CPU-only setups, making it practical for developers, small teams, or hobbyists without access to data-center hardware. Despite its compact footprint, Devstral Small 2 retains key capabilities of larger models; it can reason across multiple files and track dependencies.
    Starting Price: Free
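The 256K-token window invites a quick back-of-the-envelope check of whether a codebase fits in a single prompt. The sketch below uses a rough 4-characters-per-token heuristic, which is an assumption, not the model's actual tokenizer.

```python
CONTEXT_TOKENS = 256_000   # advertised window for the Devstral 2 family
CHARS_PER_TOKEN = 4        # rough heuristic; real tokenizers vary by language

def estimated_tokens(text: str) -> int:
    """Cheap token estimate: characters divided by an assumed ratio."""
    return len(text) // CHARS_PER_TOKEN + 1

def fits_in_context(file_texts) -> bool:
    """True if the concatenated files likely fit in one 256K-token prompt."""
    return sum(estimated_tokens(t) for t in file_texts) <= CONTEXT_TOKENS

small_repo = ["def add(a, b):\n    return a + b\n"] * 100
print(fits_in_context(small_repo))  # prints True: small repos fit easily
```

For serious use one would count tokens with the model's own tokenizer; the heuristic only helps triage which repositories need chunking.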
  • 8
    Mistral Vibe

    Mistral AI

    Mistral Vibe is an agentic coding platform developed by Mistral AI that helps developers write, test, and deploy software more efficiently. The system uses specialized AI coding models that understand the full context of a project’s codebase to provide intelligent suggestions and automation. Developers can interact with Vibe through the terminal, IDE extensions, or automated agents that work asynchronously. The platform supports tasks such as code generation, debugging, documentation creation, and test generation. Vibe can analyze entire repositories to refactor code, translate legacy systems to modern stacks, and optimize performance. It integrates with development tools like GitHub, GitLab, and project management platforms to provide contextual insights during development. By combining autonomous coding agents with deep project awareness, Mistral Vibe enables teams to accelerate development while maintaining code quality.
    Starting Price: Free
  • 9
    DeepCoder

    Agentica Project

    DeepCoder is a fully open source code-reasoning and generation model released by Agentica Project in collaboration with Together AI. It is fine-tuned from DeepSeek-R1-Distilled-Qwen-14B using distributed reinforcement learning, achieving 60.6% accuracy on LiveCodeBench (an 8% improvement over the base model), a performance level that matches that of proprietary models such as o3-mini (2025-01-31, Low) and o1 while using only 14 billion parameters. It was trained over 2.5 weeks on 32 H100 GPUs with a curated dataset of roughly 24,000 coding problems drawn from verified sources (including TACO-Verified, PrimeIntellect SYNTHETIC-1, and LiveCodeBench submissions), each problem requiring a verifiable solution and at least five unit tests to ensure reliability for RL training. To handle long-range context, DeepCoder employs techniques such as iterative context lengthening and overlong filtering.
    Starting Price: Free
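The requirement that every training problem carry at least five unit tests implies a verifier of roughly this shape. This is a toy sketch of the idea (reward only when every test passes), not Agentica's actual RL harness; the `solve()` calling convention is invented for illustration.

```python
def passes_unit_tests(candidate_src: str, tests: list) -> bool:
    """Run candidate code, then check each (input, expected) pair.

    Mirrors the verifiable-reward gate described above: a candidate
    solution earns credit only if all unit tests pass; any crash,
    syntax error, or wrong answer counts as failure.
    """
    namespace = {}
    try:
        exec(candidate_src, namespace)      # define solve() in a fresh namespace
        solve = namespace["solve"]
        return all(solve(inp) == out for inp, out in tests)
    except Exception:
        return False

tests = [(n, n * n) for n in range(5)]      # at least five tests per problem
good = "def solve(n):\n    return n * n\n"
bad = "def solve(n):\n    return n + n\n"
print(passes_unit_tests(good, tests), passes_unit_tests(bad, tests))  # True False
```

A production harness would additionally sandbox execution and enforce time and memory limits, which `exec` alone does not provide.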
  • 10
    DeepSWE

    Agentica Project

    DeepSWE is a fully open source, state-of-the-art coding agent built on top of the Qwen3-32B foundation model and trained exclusively via reinforcement learning (RL), without supervised finetuning or distillation from proprietary models. It is developed using rLLM, Agentica’s open source RL framework for language agents. DeepSWE operates as an agent; it interacts with a simulated development environment (via the R2E-Gym environment) using a suite of tools (file editor, search, shell-execution, submit/finish), enabling it to navigate codebases, edit multiple files, compile/run tests, and iteratively produce patches or complete engineering tasks. DeepSWE exhibits emergent behaviors beyond simple code generation; when presented with bugs or feature requests, the agent reasons about edge cases, seeks existing tests in the repository, proposes patches, writes extra tests for regressions, and dynamically adjusts its “thinking” effort.
    Starting Price: Free
  • 11
    DeepScaleR

    Agentica Project

    DeepScaleR is a 1.5-billion-parameter language model fine-tuned from DeepSeek-R1-Distilled-Qwen-1.5B using distributed reinforcement learning and a novel iterative context-lengthening strategy that gradually increases its context window from 8K to 24K tokens during training. It was trained on ~40,000 carefully curated mathematical problems drawn from competition-level datasets like AIME (1984–2023), AMC (pre-2023), Omni-MATH, and STILL. DeepScaleR achieves 43.1% accuracy on AIME 2024, a roughly 14.3 percentage point boost over the base model, and surpasses the performance of the proprietary O1-Preview model despite its much smaller size. It also posts strong results on a suite of math benchmarks (e.g., MATH-500, AMC 2023, Minerva Math, OlympiadBench), demonstrating that small, efficient models tuned with RL can match or exceed larger baselines on reasoning tasks.
    Starting Price: Free
  • 12
    GLM-4.6V

    Zhipu AI

    GLM-4.6V is a state-of-the-art open source multimodal vision-language model from the Z.ai (GLM-V) family designed for reasoning, perception, and action. It ships in two variants: a full-scale version (106B parameters) for cloud or high-performance clusters, and a lightweight “Flash” variant (9B) optimized for local deployment or low-latency use. GLM-4.6V supports a native context window of up to 128K tokens during training, enabling it to process very long documents or multimodal inputs. Crucially, it integrates native Function Calling, meaning the model can take images, screenshots, documents, or other visual media as input directly (without manual text conversion), reason about them, and trigger tool calls, bridging “visual perception” with “executable action.” This enables a wide spectrum of capabilities, such as interleaved image-and-text content generation (for example, combining document understanding with text summarization or generating image-annotated responses).
    Starting Price: Free
  • 13
    GLM-4.1V

    Zhipu AI

    GLM-4.1V is a powerful, compact vision-language model designed for reasoning and perception across images, text, and documents. The 9-billion-parameter variant (GLM-4.1V-9B-Thinking) is built on the GLM-4-9B foundation and enhanced through a specialized training paradigm using Reinforcement Learning with Curriculum Sampling (RLCS). It supports a 64K-token context window and accepts high-resolution inputs (up to 4K images, any aspect ratio), enabling it to handle complex tasks such as optical character recognition, image captioning, chart and document parsing, video and scene understanding, GUI-agent workflows (e.g., interpreting screenshots and recognizing UI elements), and general vision-language reasoning. In benchmark evaluations at the 10B-parameter scale, GLM-4.1V-9B-Thinking achieved top performance on 23 of 28 tasks.
    Starting Price: Free
  • 14
    GLM-4.5V-Flash
    GLM-4.5V-Flash is an open source vision-language model designed to bring strong multimodal capabilities into a lightweight, deployable package. It supports image, video, document, and GUI inputs, enabling tasks such as scene understanding, chart and document parsing, screen reading, and multi-image analysis. Compared to larger models in the series, GLM-4.5V-Flash offers a compact footprint while retaining core VLM capabilities like visual reasoning, video understanding, GUI task handling, and complex document parsing. It can serve in “GUI agent” workflows, meaning it can interpret screenshots or desktop captures, recognize icons or UI elements, and assist with automated desktop or web-based tasks. Although it forgoes some of the largest-model performance gains, GLM-4.5V-Flash remains versatile for real-world multimodal tasks where efficiency, lower resource usage, and broad modality support are prioritized.
    Starting Price: Free
  • 15
    GLM-4.5V

    Zhipu AI

    GLM-4.5V builds on the GLM-4.5-Air foundation, using a Mixture-of-Experts (MoE) architecture with 106 billion total parameters and 12 billion activation parameters. It achieves state-of-the-art performance among open-source VLMs of similar scale across 42 public benchmarks, excelling in image, video, document, and GUI-based tasks. It supports a broad range of multimodal capabilities, including image reasoning (scene understanding, spatial recognition, multi-image analysis), video understanding (segmentation, event recognition), complex chart and long-document parsing, GUI-agent workflows (screen reading, icon recognition, desktop automation), and precise visual grounding (e.g., locating objects and returning bounding boxes). GLM-4.5V also introduces a “Thinking Mode” switch, allowing users to choose between fast responses or deeper reasoning when needed.
    Starting Price: Free
  • 16
    Foxglove

    Foxglove is a visualization, observability, and data management platform purpose-built for robotics and embodied AI development that centralizes and simplifies working with large, multimodal temporal datasets, including time series, sensor logs, imagery, lidar/point clouds, geospatial maps, and more, in a single, integrated workspace. It enables engineers to record, import, organize, stream, and visualize both live and recorded data from robots using intuitive, customizable dashboards with interactive panels for 3D scenes, plots, raw messages, images, and maps, helping users understand how robots sense, think, and act. Foxglove supports real-time connections to systems like ROS and ROS 2 via bridges and web sockets, enables cross-platform workflows (desktop app for Linux, Windows, and macOS), and facilitates rapid analysis, debugging, and performance optimization by synchronizing diverse data sources in time and space.
    Starting Price: $18 per month
  • 17
    NWarch AI

    Daten & Wissen

    Daten & Wissen (DPIIT-recognised; NVIDIA Inception partner) builds NWarch AI, an edge-first video analytics and automation platform that converts existing CCTV and sensor streams into real-time safety, crowd, and operational intelligence. It solves fragmented video data, slow manual monitoring, and costly rip-and-replace upgrades by delivering plug-and-play edge inference, natural-language AI agents for on-demand queries, and zero-code automation workflows. NWarch AI serves construction, manufacturing, logistics, retail, and security, enabling faster incident response, automated compliance reports, and measurable efficiency gains.
    Starting Price: 500 per use case per month
  • 18
    GLM-4.7

    Zhipu AI

    GLM-4.7 is an advanced large language model designed to significantly elevate coding, reasoning, and agentic task performance. It delivers major improvements over GLM-4.6 in multilingual coding, terminal-based tasks, and real-world software engineering benchmarks such as SWE-bench and Terminal Bench. GLM-4.7 supports “thinking before acting,” enabling more stable, accurate, and controllable behavior in complex coding and agent workflows. The model also introduces strong gains in UI and frontend generation, producing cleaner webpages, better layouts, and more polished slides. Enhanced tool-using capabilities allow GLM-4.7 to perform more effectively in web browsing, automation, and agent benchmarks. Its reasoning and mathematical performance has improved substantially, showing strong results on advanced evaluation suites. GLM-4.7 is available via Z.ai, API platforms, coding agents, and local deployment for flexible adoption.
    Starting Price: Free
  • 19
    MiniMax-M2.1
    MiniMax-M2.1 is an open-source, agentic large language model designed for advanced coding, tool use, and long-horizon planning. It was released to the community to make high-performance AI agents more transparent, controllable, and accessible. The model is optimized for robustness in software engineering, instruction following, and complex multi-step workflows. MiniMax-M2.1 supports multilingual development and performs strongly across real-world coding scenarios. It is suitable for building autonomous applications that require reasoning, planning, and execution. The model weights are fully open, enabling local deployment and customization. MiniMax-M2.1 represents a major step toward democratizing top-tier agent capabilities.
    Starting Price: Free
  • 20
    Dafthunk

    Dafthunk is a visual workflow automation platform that lets users build, manage, and deploy serverless automation workflows using a drag-and-drop editor without needing to set up infrastructure or use containers. Workflows are constructed by visually connecting nodes that perform tasks across AI, browser automation, data processing, media generation, integrations, and developer tools, and then executed on Cloudflare’s global edge network with built-in scaling and durable execution. It supports workflow triggers including HTTP webhooks, queues, cron schedules, and manual starts, enabling event-driven, time-based, and custom-initiated automation. It includes persistent workflow state storage and execution history using Cloudflare D1 and R2 storage services. Users can incorporate AI models from providers like OpenAI, Anthropic, Google, and Cloudflare AI for text generation, summarization, vision, NLP, transcription, image creation, and more.
    Starting Price: Free
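Among the triggers listed above, cron schedules are the easiest to illustrate. The matcher below is a generic sketch of five-field cron semantics, supporting only `*` and comma-separated value lists; it is not Dafthunk's actual scheduler.

```python
def cron_matches(expr: str, minute: int, hour: int,
                 dom: int, month: int, dow: int) -> bool:
    """Check a time tuple against a 5-field cron expression.

    Fields are minute, hour, day-of-month, month, day-of-week.
    '*' matches anything; '9,17' matches either listed value.
    Ranges ('1-5') and steps ('*/15') are omitted for brevity.
    """
    fields = expr.split()
    values = (minute, hour, dom, month, dow)
    for field, value in zip(fields, values):
        if field == "*":
            continue
        if value not in {int(v) for v in field.split(",")}:
            return False
    return True

# Fire at minute 0 of hours 9 and 17, any day.
print(cron_matches("0 9,17 * * *", 0, 9, 15, 6, 2))   # prints True
print(cron_matches("0 9,17 * * *", 30, 9, 15, 6, 2))  # prints False
```

A scheduler built on this would evaluate the expression once per minute against the current time and start the workflow on a match.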
  • 21
    Happy Coder

    Happy, also known as Happy Coder, is a free, open source mobile and web client that lets users spawn, view, and control multiple Claude Code AI coding agent sessions on any device (phone, tablet, laptop, or desktop), syncing them in real time through an encrypted relay architecture so that a session started on one device can be continued seamlessly on another without losing context. It comprises three coordinated components: a CLI program that runs locally to launch and monitor Claude Code; a mobile or web app that connects securely to the CLI session using end-to-end encryption so nobody (including the relay server) can read your data; and a relay server that simply passes encrypted blobs between devices without access to the contents. This design lets developers keep their existing tools, editors, and workflows while adding remote control capability.
    Starting Price: Free
  • 22
    Pencil

    Pencil.dev

    Pencil.dev is an AI-powered design-in-code canvas and creative tool that brings visual interface design directly into development environments like Cursor, VS Code, and other IDEs so designers and engineers can work without handoffs between tools. Built around an agent-driven MCP (Model Context Protocol) canvas and an open design format that lives in your codebase, Pencil lets you draw, iterate, and generate pixel-perfect UI screens with AI assistance while keeping the design files versioned in Git alongside your source code, enabling branches, merges, and rollbacks like regular code. It eliminates the friction of switching between tools by embedding a Figma-like canvas into the IDE, supports importing frames and assets from Figma with vectors and styles intact, and lets you manipulate design elements directly with familiar editing panels, layers, and CSS-like properties, while AI models help generate screens, flows, and components in parallel.
    Starting Price: Free
  • 23
    Zo Computer

    Zo Computer is an always-on AI companion designed to act like your own personal cloud computer. It works 24/7 to schedule meetings, clean your inbox, organize files, and run tasks while you’re away. Users can interact with Zo through its app or simply by texting it commands. Built on a powerful Linux server, Zo gives you full control to host files, build automations, and run projects effortlessly. It supports deep research, web browsing, reminders, and data organization in one unified environment. Zo combines AI, code, and compute into a single system you own. It’s built to help you get real work done, not just chat.
    Starting Price: $18/month
  • 24
    Composer 1
    Composer is Cursor’s custom-built agentic AI model optimized specifically for software engineering tasks and designed to power fast, interactive coding assistance directly within the Cursor IDE, a VS Code-derived editor enhanced with intelligent automation. It is a mixture-of-experts model trained with reinforcement learning (RL) on real-world coding problems across large codebases, so it can produce high-speed, context-aware responses, from code edits and planning to answers that understand project structure, tools, and conventions, with generation speeds roughly four times faster than similar models in benchmarks. Composer is specialized for development workflows, leveraging long-context understanding, semantic search, and limited tool access (like file editing and terminal commands) so it can solve complex engineering requests with efficient and practical outputs.
    Starting Price: $20 per month
  • 25
    Kimi Code CLI

    Kimi Code CLI

    Moonshot AI

    Kimi Code CLI is an AI-powered command-line agent that runs in the terminal to assist developers with software development and terminal operations. It can read and edit code, execute shell commands, search and fetch web pages, and autonomously plan and adjust its actions during execution, providing a shell-like interactive experience where users describe their needs in natural language or switch to direct command mode. It supports integrations with IDEs and local agent clients via the Agent Client Protocol for enriched workflows, and simplifies tasks such as writing and modifying code, fixing bugs, refactoring, exploring unfamiliar projects, answering architecture questions, and automating batch tasks or build and test scripts. Installation is handled via a script that installs the necessary tool manager and then the Kimi CLI package, after which users verify with a version command and configure an API source.
    Starting Price: Free
  • 26
    LobeHub

    LobeHub is an open-source AI platform that lets users create, customize, and manage AI agents and assistant teams that grow with their needs, enabling collaboration across workflows and projects with shared context and adaptive behavior. It supports multiple AI models and providers through an intuitive interface, allowing seamless switching and conversations across models while integrating knowledge bases, plugins, and task-specific skills for enhanced productivity. Users can deploy private chat applications and assistants, connect agents to real-world tools and data sources, and organize work into projects, schedules, and workspaces with coordinated agents executing tasks in parallel. LobeHub emphasizes long-term co-evolution between humans and agents through personal memory and continual learning, offering extensible frameworks for multimodal interaction and community contributions, such as an agent marketplace and plugin ecosystem.
    Starting Price: $9.90 per month
  • 27
    Oz

    Warp

    Oz is a cloud-based orchestration platform for AI coding agents that lets developers and teams run, manage, automate, and scale unlimited parallel cloud coding agents without building custom infrastructure, providing programmable, auditable, and fully steerable workflows that automate repetitive development tasks and complex code changes. It enables you to launch agents from the CLI, web app, APIs, SDKs, Warp Terminal, or even mobile, orchestrate hundreds of agents in parallel with built-in audit trails, session tracking, and visibility, and monitor or interact with running agents in a shared control plane. Oz supports flexible hosting on your infrastructure or Warp’s, isolates each agent in secure environments, produces real artifacts like plans and pull requests, and handles multi-repo changes so agents can coordinate sweeping updates across large codebases.
    Starting Price: $18 per month
  • 28
    Rowboat

    RowBoat is an open source AI-assisted integrated development environment designed to let developers and teams rapidly build, manage, test, and deploy multi-agent AI systems (intelligent assistants) using a visual interface and natural language, while integrating tools and workflows without heavy engineering overhead. It includes RowBoat Studio, where you describe the assistant you want in plain English, and an AI “Copilot” generates the agents, connects them into workflows, and lets you refine and test them in real time before deployment. An assistant is composed of multiple agents, each with access to tools and data sources, that work together to interact with users, perform background tasks, or automate complex workflows, with support for API and Python SDK integration so agents can power conversations or actions inside apps and websites.
    Starting Price: Free
  • 29
    MiniMax M2.5
    MiniMax M2.5 is a frontier AI model engineered for real-world productivity across coding, agentic workflows, search, and office tasks. Extensively trained with reinforcement learning in hundreds of thousands of real-world environments, it achieves state-of-the-art performance in benchmarks such as SWE-Bench Verified and BrowseComp. The model demonstrates strong architectural thinking, decomposing complex problems before generating code across more than ten programming languages. M2.5 operates at high throughput speeds of up to 100 tokens per second, enabling faster completion of multi-step tasks. It is optimized for efficient reasoning, reducing token usage and execution time compared to previous versions. With dramatically lower pricing than competing frontier models, it delivers powerful performance at minimal cost. Integrated into MiniMax Agent, M2.5 supports professional-grade office workflows, financial modeling, and autonomous task execution.
    Starting Price: Free
  • 30
    PicoClaw

    PicoClaw

    PicoClaw

    PicoClaw is an ultra-lightweight AI assistant built in Go and designed to run efficiently on low-cost hardware with minimal resource usage. It operates with less than 10MB of RAM and can boot in under one second, making it significantly faster and more affordable than many traditional AI assistants. The project was refactored from the ground up through a self-bootstrapping process where the AI agent contributed to its own architectural migration and optimization. PicoClaw is portable across RISC-V, ARM, and x86 platforms through a single self-contained binary. It supports deployment via precompiled binaries, source builds, or Docker Compose for flexible setup options. The assistant integrates with multiple chat platforms such as Telegram, Discord, QQ, DingTalk, and LINE for conversational access. With built-in sandboxing and workspace restrictions, PicoClaw emphasizes security while enabling scheduled tasks, long-term memory, and autonomous agent workflows.
    Starting Price: Free