Compare the Top Large Language Models that integrate with Kilo Code as of January 2026

This is a list of Large Language Models that integrate with Kilo Code. Use the filters on the left to narrow the results, and view the products that work with Kilo Code in the table below.

What are Large Language Models for Kilo Code?

Large language models are artificial neural networks used to process and understand natural language. Commonly trained on large datasets, they can be used for a variety of tasks such as text generation, text classification, question answering, and machine translation. Over time, these models have continued to improve, delivering better accuracy and performance across these tasks. Compare and read user reviews of the best Large Language Models for Kilo Code currently available using the table below. This list is updated regularly.

  • 1
    Claude Sonnet 3.7
    Claude Sonnet 3.7, developed by Anthropic, is a cutting-edge AI model that combines rapid response with deep reflective reasoning. It lets users toggle between quick, efficient answers and extended, step-by-step thinking, making it well suited to complex problem-solving (see the sketch after this list). Because Claude can reflect before answering, it excels at tasks that require high-level reasoning and nuanced understanding, and it strengthens workloads such as coding, natural language processing, and critical analysis. Available across various platforms, it offers a powerful tool for professionals and organizations seeking a high-performance, adaptable AI.
    Starting Price: Free
  • 2
    GLM-4.7
    Zhipu AI
    GLM-4.7 is an advanced large language model designed to significantly elevate coding, reasoning, and agentic task performance. It delivers major improvements over GLM-4.6 in multilingual coding, terminal-based tasks, and real-world software engineering benchmarks such as SWE-bench and Terminal Bench. GLM-4.7 supports “thinking before acting,” enabling more stable, accurate, and controllable behavior in complex coding and agent workflows. The model also introduces strong gains in UI and frontend generation, producing cleaner webpages, better layouts, and more polished slides. Enhanced tool-using capabilities allow GLM-4.7 to perform more effectively in web browsing, automation, and agent benchmarks. Its reasoning and mathematical performance has improved substantially, showing strong results on advanced evaluation suites. GLM-4.7 is available via Z.ai, API platforms, coding agents, and local deployment for flexible adoption.
    Starting Price: Free
  • 3
    GLM-4.6
    Zhipu AI
    GLM-4.6 improves upon its predecessor with stronger reasoning, coding, and agentic capabilities: it demonstrates clear gains in inferential performance, supports tool use during inference, and integrates more effectively into agent frameworks. In benchmark tests spanning reasoning, coding, and agents, GLM-4.6 outperforms GLM-4.5 and shows competitive strength against models such as DeepSeek-V3.2-Exp and Claude Sonnet 4, though it still trails Claude Sonnet 4.5 in pure coding performance. In real-world tests using an extended “CC-Bench” suite across front-end development, tool building, data analysis, and algorithmic tasks, GLM-4.6 beats GLM-4.5 and approaches parity with Claude Sonnet 4, winning ~48.6% of head-to-head comparisons while also achieving ~15% better token efficiency. GLM-4.6 is available via the Z.ai API, and developers can integrate it as an LLM backend or agent core using the platform’s API (see the sketch after this list).
    Starting Price: Free
  • 4
    MiniMax M2
    MiniMax
    MiniMax M2 is an open source foundation model built specifically for agentic applications and coding workflows, striking a new balance of performance, speed, and cost. It excels in end-to-end development scenarios, handling programming, tool-calling, and complex, long-chain workflows with capabilities such as Python integration, while delivering inference speeds of around 100 tokens per second and offering API pricing at roughly 8% of the cost of comparable proprietary models. The model supports “Lightning Mode” for high-speed, lightweight agent tasks, and “Pro Mode” for in-depth full-stack development, report generation, and web-based tool orchestration; its weights are fully open source and available for local deployment with vLLM or SGLang (see the deployment sketch after this list). MiniMax M2 positions itself as a production-ready model that enables agents to complete independent tasks, such as data analysis, programming, tool orchestration, and large-scale multi-step logic, at organizational scale.
    Starting Price: $0.30 per million input tokens
  • 5
    GLM-4.6V
    Zhipu AI
    GLM-4.6V is a state-of-the-art open source multimodal vision-language model from the Z.ai (GLM-V) family designed for reasoning, perception, and action. It ships in two variants: a full-scale version (106B parameters) for cloud or high-performance clusters, and a lightweight “Flash” variant (9B) optimized for local deployment or low-latency use. GLM-4.6V was trained with a native context window of up to 128K tokens, enabling it to process very long documents or multimodal inputs. Crucially, it integrates native Function Calling, meaning the model can take images, screenshots, documents, or other visual media as input directly (without manual text conversion), reason about them, and trigger tool calls, bridging “visual perception” with “executable action” (see the sketch after this list). This enables a wide spectrum of capabilities, such as interleaved image-and-text content generation (for example, combining document understanding with text summarization or generating image-annotated responses).
    Starting Price: Free
  • 6
    GLM-4.1V
    Zhipu AI
    GLM-4.1V is a powerful, compact vision-language model designed for reasoning and perception across images, text, and documents. The 9-billion-parameter variant (GLM-4.1V-9B-Thinking) is built on the GLM-4-9B foundation and enhanced through a specialized training paradigm using Reinforcement Learning with Curriculum Sampling (RLCS). It supports a 64K-token context window and accepts high-resolution inputs (up to 4K images, any aspect ratio), enabling it to handle complex tasks such as optical character recognition, image captioning, chart and document parsing, video and scene understanding, GUI-agent workflows (e.g., interpreting screenshots and recognizing UI elements), and general vision-language reasoning. In benchmark evaluations at the 10B-parameter scale, GLM-4.1V-9B-Thinking achieved top performance on 23 of 28 tasks.
    Starting Price: Free
  • 7
    GLM-4.5V-Flash
    GLM-4.5V-Flash is an open source vision-language model designed to bring strong multimodal capabilities into a lightweight, deployable package. It supports image, video, document, and GUI inputs, enabling tasks such as scene understanding, chart and document parsing, screen reading, and multi-image analysis. Compared to larger models in the series, GLM-4.5V-Flash offers a compact footprint while retaining core VLM capabilities like visual reasoning, video understanding, GUI task handling, and complex document parsing. It can serve in “GUI agent” workflows, meaning it can interpret screenshots or desktop captures, recognize icons or UI elements, and assist with automated desktop or web-based tasks. Although it forgoes some of the largest-model performance gains, GLM-4.5V-Flash remains versatile for real-world multimodal tasks where efficiency, lower resource usage, and broad modality support are prioritized.
    Starting Price: Free
  • 8
    GLM-4.5V
    Zhipu AI
    GLM-4.5V builds on the GLM-4.5-Air foundation, using a Mixture-of-Experts (MoE) architecture with 106 billion total parameters and 12 billion activation parameters. It achieves state-of-the-art performance among open source VLMs of similar scale across 42 public benchmarks, excelling in image, video, document, and GUI-based tasks. It supports a broad range of multimodal capabilities, including image reasoning (scene understanding, spatial recognition, multi-image analysis), video understanding (segmentation, event recognition), complex chart and long-document parsing, GUI-agent workflows (screen reading, icon recognition, desktop automation), and precise visual grounding (e.g., locating objects and returning bounding boxes). GLM-4.5V also introduces a “Thinking Mode” switch, allowing users to choose between fast responses or deeper reasoning when needed (see the sketch after this list).
    Starting Price: Free
  • 9
    MiniMax-M2.1
    MiniMax-M2.1 is an open-source, agentic large language model designed for advanced coding, tool use, and long-horizon planning. It was released to the community to make high-performance AI agents more transparent, controllable, and accessible. The model is optimized for robustness in software engineering, instruction following, and complex multi-step workflows. MiniMax-M2.1 supports multilingual development and performs strongly across real-world coding scenarios. It is suitable for building autonomous applications that require reasoning, planning, and execution. The model weights are fully open, enabling local deployment and customization. MiniMax-M2.1 represents a major step toward democratizing top-tier agent capabilities.
    Starting Price: Free
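
Integration sketches (illustrative)

The toggle between quick replies and deeper reflection described for Claude Sonnet 3.7 (entry 1) is exposed in Anthropic's Messages API as an optional extended-thinking setting. The sketch below is a minimal illustration using the anthropic Python SDK; the model ID string and token budgets are assumptions and should be checked against Anthropic's current documentation.

```python
# Minimal sketch: toggling extended thinking on Anthropic's Messages API.
# Assumes the `anthropic` Python SDK and an ANTHROPIC_API_KEY in the environment;
# the model ID and token budgets below are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",   # assumed model ID; verify against current docs
    max_tokens=4096,
    # Enable deeper reflection by granting a thinking budget; omit this field
    # for quick, direct answers.
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[{"role": "user", "content": "Refactor this function to avoid the N+1 query."}],
)

# The reply interleaves "thinking" blocks with the final "text" blocks.
for block in response.content:
    if block.type == "text":
        print(block.text)
```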
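GLM-4.6 (entry 3) is described as reachable through the Z.ai API for use as an LLM backend or agent core. A common integration pattern is to point an OpenAI-compatible client at the provider's endpoint; the base URL, model name, and compatibility assumption below are illustrative and should be confirmed against Z.ai's documentation.

```python
# Minimal sketch: using GLM-4.6 as an LLM backend through an OpenAI-compatible
# client. The base URL, env var, and model name are assumptions; check Z.ai's
# API documentation for the exact values.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["ZAI_API_KEY"],          # hypothetical env var name
    base_url="https://api.z.ai/api/paas/v4",    # assumed OpenAI-compatible endpoint
)

completion = client.chat.completions.create(
    model="glm-4.6",                            # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a coding agent inside Kilo Code."},
        {"role": "user", "content": "Write a unit test for the parse_config helper."},
    ],
)
print(completion.choices[0].message.content)
```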
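MiniMax M2 (entry 4) is noted as deployable locally with vLLM or SGLang. The sketch below shows the general vLLM pattern: serve the open weights behind vLLM's OpenAI-compatible server, then call it with a standard client. The Hugging Face repository name and serving flags are assumptions; consult the model card for the recommended launch command.

```python
# Minimal sketch of local deployment behind vLLM's OpenAI-compatible server.
# Launch the server first (repository name and flags are assumptions):
#
#   vllm serve MiniMaxAI/MiniMax-M2 --tensor-parallel-size 8
#
# Then point any OpenAI-compatible client at the local endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2",   # must match the name the server was launched with
    messages=[{"role": "user", "content": "Plan the steps to add pagination to this API."}],
)
print(completion.choices[0].message.content)
```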
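GLM-4.6V (entry 5) is described as accepting visual media directly and triggering tool calls from what it sees. The sketch below illustrates that pattern in the widely used OpenAI-compatible messages-plus-tools format; the endpoint, model name, and the click_element tool schema are assumptions rather than confirmed details of the Z.ai API.

```python
# Minimal sketch: sending an image plus a tool definition so the model can
# bridge "visual perception" with "executable action". Endpoint, model name,
# and the click_element tool are illustrative assumptions.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["ZAI_API_KEY"],
    base_url="https://api.z.ai/api/paas/v4",   # assumed OpenAI-compatible endpoint
)

tools = [{
    "type": "function",
    "function": {
        "name": "click_element",               # hypothetical tool for a GUI agent
        "description": "Click a UI element identified in the screenshot.",
        "parameters": {
            "type": "object",
            "properties": {"element_label": {"type": "string"}},
            "required": ["element_label"],
        },
    },
}]

response = client.chat.completions.create(
    model="glm-4.6v",                          # assumed model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}},
            {"type": "text", "text": "Open the settings panel shown in this screenshot."},
        ],
    }],
    tools=tools,
)
# If the model decides to act, the reply carries a tool call instead of plain text.
print(response.choices[0].message.tool_calls)
```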
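GLM-4.5V (entry 8) advertises a "Thinking Mode" switch for choosing between fast responses and deeper reasoning. The sketch below shows how such a switch is commonly passed as an extra request field on an OpenAI-compatible endpoint; the parameter name and values are assumptions inferred from the product description, not confirmed API details.

```python
# Minimal sketch: toggling a "Thinking Mode"-style switch via an extra request
# field. The `thinking` field name and its values are assumptions based on the
# product description; verify against the provider's documentation.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["ZAI_API_KEY"],
    base_url="https://api.z.ai/api/paas/v4",   # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="glm-4.5v",                          # assumed model identifier
    messages=[{"role": "user", "content": "Summarize the chart in the attached report."}],
    extra_body={"thinking": {"type": "enabled"}},   # assumed switch; "disabled" for fast replies
)
print(response.choices[0].message.content)
```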