AgentBench vs. Qwen Code
About: AgentBench
AgentBench is an evaluation framework specifically designed to assess the capabilities and performance of autonomous AI agents. It provides a standardized set of benchmarks that test various aspects of an agent's behavior, such as task-solving ability, decision-making, adaptability, and interaction with simulated environments. By evaluating agents on tasks across different domains, AgentBench helps developers identify strengths and weaknesses in the agents’ performance, such as their ability to plan, reason, and learn from feedback. The framework offers insights into how well an agent can handle complex, real-world-like scenarios, making it useful for both research and practical development. Overall, AgentBench supports the iterative improvement of autonomous agents, ensuring they meet reliability and efficiency standards before wider application.
About: Qwen Code
Qwen3‑Coder is an agentic code model available in multiple sizes, led by the 480B‑parameter Mixture‑of‑Experts variant (35B active parameters), which natively supports 256K‑token contexts (extendable to 1M) and achieves state‑of‑the‑art results on Agentic Coding, Browser‑Use, and Tool‑Use tasks, comparable to Claude Sonnet 4. Pre‑training on 7.5T tokens (70% code) and on synthetic data cleaned via Qwen2.5‑Coder optimized both coding proficiency and general abilities, while post‑training employs large‑scale, execution‑driven reinforcement learning and long‑horizon RL across 20,000 parallel environments, allowing the model to excel on multi‑turn software‑engineering benchmarks such as SWE‑Bench Verified without test‑time scaling. Alongside the model, the open‑source Qwen Code CLI (forked from Gemini Code) brings Qwen3‑Coder into agentic workflows with customized prompts, a function‑calling protocol, and seamless integration with Node.js, the OpenAI SDK, and more.
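The OpenAI‑SDK integration mentioned above implies an OpenAI‑compatible chat‑completions protocol. As a minimal sketch of what a client request could look like (the model name `qwen3-coder-plus` and the payload shape are assumptions for illustration, not details taken from this page):

```python
import json


def build_chat_request(prompt: str,
                       model: str = "qwen3-coder-plus",
                       max_tokens: int = 1024) -> dict:
    """Build an OpenAI-style chat-completions payload.

    The model name is a hypothetical placeholder; substitute whatever
    identifier your Qwen3-Coder endpoint actually serves.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a coding assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
    }


# An OpenAI-compatible server would accept this JSON body at
# POST /v1/chat/completions.
payload = build_chat_request("Write a function that reverses a string.")
print(json.dumps(payload, indent=2))
```

With the official OpenAI Python SDK, the same payload maps onto `client.chat.completions.create(**payload)` once the client's `base_url` points at the serving endpoint.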
Platforms Supported: AgentBench
Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook
Platforms Supported: Qwen Code
Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook
Audience: AgentBench
AI developers who want a standardized framework for benchmarking and evaluating their LLM-based agents
Audience: Qwen Code
AI researchers and software engineers interested in an agentic coding model with large‑context support and turnkey CLI tooling for real‑world, automated code generation
Support: AgentBench
Phone Support
24/7 Live Support
Online
Support: Qwen Code
Phone Support
24/7 Live Support
Online
API: AgentBench
Offers API
API: Qwen Code
Offers API
Pricing: AgentBench
No information available.
Pricing: Qwen Code
Free
Training: AgentBench
Documentation
Webinars
Live Online
In Person
Training: Qwen Code
Documentation
Webinars
Live Online
In Person
Company Information: AgentBench
China
llmbench.ai/agent
Company Information: Qwen
Founded: 2023
China
github.com/QwenLM/qwen-code
Integrations: AgentBench
Alibaba AI Coding Plan
Claude Opus 4.1
Claude Opus 4.5
Claude Opus 4.6
Claude Sonnet 4
Claude Sonnet 4.5
Claude Sonnet 4.6
Node.js
OpenAI
Qwen2.5
Integrations: Qwen Code
Alibaba AI Coding Plan
Claude Opus 4.1
Claude Opus 4.5
Claude Opus 4.6
Claude Sonnet 4
Claude Sonnet 4.5
Claude Sonnet 4.6
Node.js
OpenAI
Qwen2.5