DeepCoder vs. LTM-2-mini

DeepCoder (Agentica Project)
LTM-2-mini (Magic AI)

About DeepCoder

DeepCoder is a fully open-source code-reasoning and generation model released by Agentica Project in collaboration with Together AI. Fine-tuned from DeepSeek-R1-Distilled-Qwen-14B using distributed reinforcement learning, it reaches 60.6% accuracy on LiveCodeBench, an 8% improvement over the base model and a level that matches proprietary models such as o3-mini (2025-01-31, Low) and o1, while using only 14 billion parameters. The model was trained over 2.5 weeks on 32 H100 GPUs with a curated dataset of roughly 24,000 coding problems drawn from verified sources, including TACO-Verified, PrimeIntellect SYNTHETIC-1, and LiveCodeBench submissions; each problem was required to have a verifiable solution and at least five unit tests to be reliable enough for RL training. To handle long-range context, DeepCoder employs techniques such as iterative context lengthening and overlong filtering.
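The curation rule above (a verifiable reference solution plus at least five unit tests per problem) is mechanical enough to sketch. The Python snippet below is a minimal illustration, not Agentica's actual pipeline: the Problem record and its field names are assumptions, and a real filter would also execute the reference solution against its tests before admitting a problem.

    from dataclasses import dataclass, field

    @dataclass
    class Problem:
        """Hypothetical record for one candidate RL training problem."""
        prompt: str
        reference_solution: str | None = None   # None = no verified solution
        unit_tests: list[str] = field(default_factory=list)

    def keep_for_rl(problem: Problem, min_tests: int = 5) -> bool:
        """Curation rule from the text: keep a problem only if it has a
        verifiable solution and at least five unit tests."""
        return (problem.reference_solution is not None
                and len(problem.unit_tests) >= min_tests)

    pool = [
        Problem(
            prompt="Reverse a string.",
            reference_solution="def solve(s): return s[::-1]",
            unit_tests=[f"assert solve({s!r}) == {s[::-1]!r}"
                        for s in ("ab", "abc", "x", "", "race")],
        ),
        Problem(prompt="Unverified scrape.", unit_tests=["assert True"]),
    ]
    curated = [p for p in pool if keep_for_rl(p)]
    print(len(curated))  # 1 -- only the fully verified problem survives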

About LTM-2-mini

LTM-2-mini is Magic AI's 100-million-token context model. 100M tokens corresponds to roughly 10 million lines of code or about 750 novels. For each decoded token, LTM-2-mini's sequence-dimension algorithm is roughly 1000x cheaper than the attention mechanism in Llama 3.1 405B at a 100M-token context window. The contrast in memory requirements is even larger: serving Llama 3.1 405B with a 100M-token context requires 638 H100s per user just to store a single 100M-token KV cache, whereas LTM-2-mini needs only a small fraction of one H100's HBM per user for the same context.
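To see where a figure like 638 H100s comes from, it helps to work the KV-cache arithmetic. The back-of-the-envelope sketch below assumes Llama 3.1 405B's published configuration (126 layers, 8 grouped-query KV heads, head dimension 128) and 2-byte fp16/bf16 cache entries; it lands within a few percent of the quoted figure, with the residual gap down to rounding conventions such as GB vs. GiB and usable vs. nominal HBM.

    # Approximate KV-cache footprint of Llama 3.1 405B at a 100M-token context.
    layers = 126            # transformer layers (Llama 3.1 405B)
    kv_heads = 8            # grouped-query attention KV heads
    head_dim = 128          # dimension per attention head
    bytes_per_elem = 2      # fp16/bf16
    context_tokens = 100_000_000

    # Each token stores one K and one V vector per KV head in every layer.
    bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
    total_bytes = bytes_per_token * context_tokens
    h100_hbm_bytes = 80e9   # ~80 GB of HBM per H100

    print(f"{bytes_per_token / 1e3:.0f} KB per token")        # ~516 KB
    print(f"{total_bytes / 1e12:.1f} TB for the full cache")  # ~51.6 TB
    print(f"~{total_bytes / h100_hbm_bytes:.0f} H100s")       # ~645

Tens of terabytes of cache per user is the point of the comparison: LTM-2-mini's claim is that its sequence-dimension mechanism avoids storing anything of this magnitude.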

Platforms Supported (both products)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience (DeepCoder)

Developers, researchers, and enthusiasts who want to generate, debug, or reason about code without relying on proprietary models

Audience (LTM-2-mini)

AI developers interested in an LLM with a 100M-token context window

Support (both products)

Phone Support
24/7 Live Support
Online

API (both products)

Offers API


Pricing (DeepCoder)

Free
Free Version
Free Trial

Pricing (LTM-2-mini)

No information available.

Reviews/Ratings

Neither product has been reviewed yet.

Training (both products)

Documentation
Webinars
Live Online
In Person

Company Information (DeepCoder)

Agentica Project
Founded: 2025
United States
agentica-project.com

Company Information (LTM-2-mini)

Magic AI
Founded: 2022
United States
magic.dev/

Alternatives (DeepCoder)

  • DeepSWE (Agentica Project)

Alternatives (LTM-2-mini)

  • MiniMax M1 (MiniMax)
  • Devstral 2 (Mistral AI)
  • Devstral Small 2 (Mistral AI)
  • GPT-5 mini (OpenAI)
  • DeepScaleR (Agentica Project)
  • GPT-4o mini (OpenAI)

Integrations (both products)

Hugging Face
Together AI