SWE-1.5

Cognition

About

SWE-1.5 is the latest agent model released by Cognition, purpose-built for software engineering. It is characterized by a "frontier-size" architecture comprising hundreds of billions of parameters and is optimized end to end (model, inference engine, and agent harness) for both speed and intelligence. It achieves near-state-of-the-art coding performance and sets a new benchmark for latency, delivering inference speeds of up to 950 tokens/second, roughly six times faster than Anthropic's Haiku 4.5 and thirteen times faster than Sonnet 4.5. The model was trained with extensive reinforcement learning in realistic coding-agent environments featuring multi-turn workflows, unit tests, quality rubrics, and browser-based agentic execution, and it benefits from tightly integrated software tooling and high-throughput hardware, including thousands of GB200 NVL72 chips and a custom hypervisor infrastructure.
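
As a rough sanity check on the speed comparison above, the short Python sketch below back-solves the baseline throughputs implied by the 950 tokens/second figure and the stated 6x and 13x multipliers, then estimates wall-clock time for a single reply. The 2,000-token reply length is an arbitrary illustration, not a figure from Cognition.

```python
# Back-of-envelope check of the throughput claims quoted above.
# The 950 tok/s figure and the 6x / 13x multipliers come from the text;
# the 2,000-token reply length is an arbitrary example value.

SWE_1_5_TOKENS_PER_SEC = 950

implied_throughputs = {
    "SWE-1.5": SWE_1_5_TOKENS_PER_SEC,
    "Haiku 4.5 (implied)": SWE_1_5_TOKENS_PER_SEC / 6,    # ~158 tok/s
    "Sonnet 4.5 (implied)": SWE_1_5_TOKENS_PER_SEC / 13,  # ~73 tok/s
}

RESPONSE_TOKENS = 2_000  # hypothetical size of one coding-agent reply

for model, tps in implied_throughputs.items():
    seconds = RESPONSE_TOKENS / tps
    print(f"{model:>22}: {tps:7.1f} tok/s -> {seconds:5.1f} s per {RESPONSE_TOKENS}-token reply")
```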

About

Step 3.5 Flash is an advanced open-source foundation language model from StepFun, engineered for frontier reasoning and agentic capabilities with exceptional efficiency. It is built on a sparse Mixture-of-Experts (MoE) architecture that activates only about 11 billion of its roughly 196 billion parameters per token, delivering high-density intelligence and real-time responsiveness. Its 3-way multi-token prediction (MTP-3) enables generation throughput in the hundreds of tokens per second for complex multi-step reasoning chains and task execution, and a hybrid sliding-window attention scheme supports long contexts efficiently by reducing computational overhead across large datasets or codebases. The model performs strongly on reasoning, coding, and agentic benchmarks, rivaling or exceeding many larger proprietary models, and includes a scalable reinforcement learning framework for consistent self-improvement.
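
The two efficiency ideas in the paragraph above, sparse expert activation and sliding-window attention, can be illustrated with a minimal NumPy sketch. The 11 billion active / 196 billion total parameter counts come from the description; the 4-token window and 8-token sequence are toy values chosen for readability, not StepFun's actual configuration.

```python
# Minimal sketch of the two efficiency ideas described above.
import numpy as np

# 1) Sparse MoE: only a small fraction of the parameters is active per token.
#    Counts taken from the description; ratio is ~5.6%.
active_params = 11e9
total_params = 196e9
print(f"active fraction per token: {active_params / total_params:.1%}")

# 2) Sliding-window (local) causal attention mask: token i may attend only
#    to tokens in [i - window + 1, i], so per-token cost scales with the
#    window size rather than the full sequence length.
def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    i = np.arange(seq_len)[:, None]   # query positions
    j = np.arange(seq_len)[None, :]   # key positions
    causal = j <= i                   # no attention to future tokens
    local = (i - j) < window          # no attention beyond the window
    return causal & local

print(sliding_window_mask(seq_len=8, window=4).astype(int))
```

In a hybrid scheme like the one described, layers using a local mask of this kind would typically be interleaved with occasional full-attention layers; the sketch shows only the local mask itself.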

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience

Software engineering teams and organizations looking for an AI coding agent capable of rapid, accurate code generation, large-scale codebase navigation, and real-time developer augmentation

Audience

Developers, researchers, and AI engineers who want a powerful open-source foundation model capable of fast, deep reasoning, coding assistance, and agentic task execution

Support

Phone Support
24/7 Live Support
Online

Support

Phone Support
24/7 Live Support
Online

API

Offers API

API

Offers API


Pricing

No information available.
Free Version
Free Trial

Pricing

Free
Free Version
Free Trial

Reviews/Ratings

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Reviews/Ratings

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Training

Documentation
Webinars
Live Online
In Person

Training

Documentation
Webinars
Live Online
In Person

Company Information

Cognition
Founded: 2023
United States
cognition.ai/blog/swe-1-5

Company Information

StepFun
Founded: 2023
China
static.stepfun.com/blog/step-3.5-flash/

Alternatives

MiMo-V2-Flash (Xiaomi Technology)
GLM-4.6 (Zhipu AI)
GLM-4.5 (Z.ai)
MiniMax M2.5 (MiniMax)

Alternatives

MiniMax M2.5 (MiniMax)
MiMo-V2-Flash (Xiaomi Technology)
DeepSeek-V2 (DeepSeek)
Composer 1 (Cursor)

Categories

Categories

Integrations

GitHub
Hugging Face
ModelScope
arXiv

Integrations

GitHub
Hugging Face
ModelScope
arXiv