DeepSeek-Coder is a series of code-specialized language models designed to generate, complete, and infill code (and mixed code plus natural language) with high fluency in both English and Chinese. The models are trained from scratch on a massive corpus (~2 trillion tokens), of which about 87% is code and 13% is natural language. The training data preserves project-level code structure (not just line-by-line snippets), and training combines a large context window (16K tokens) with an auxiliary fill-in-the-blank objective to improve contextual completion and infilling. Multiple model sizes are offered (1.3B, 5.7B, 6.7B, and 33B parameters) so users can trade off inference cost against capability. The repo provides model weights, documentation on the training setup, evaluation results on common benchmarks (HumanEval, MultiPL-E, APPS, etc.), and inference tools.
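The fill-in-the-blank (fill-in-the-middle, FIM) objective mentioned above works by wrapping the code before and after a gap in sentinel tokens, so the model learns to predict the missing middle. The sketch below illustrates how such a prompt is typically assembled; the sentinel token strings used here are placeholders for illustration, not DeepSeek-Coder's actual special tokens — consult the repo's documentation for the exact token names.

```python
# Minimal sketch of fill-in-the-middle (FIM) prompt construction.
# NOTE: the sentinel strings below are hypothetical placeholders;
# the real special tokens are defined by the model's tokenizer.
FIM_BEGIN = "<|fim_begin|>"
FIM_HOLE = "<|fim_hole|>"
FIM_END = "<|fim_end|>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange the code before and after the gap so the model
    generates the missing middle after the end sentinel."""
    return f"{FIM_BEGIN}{prefix}{FIM_HOLE}{suffix}{FIM_END}"

# Example: ask the model to fill in the partition step of quicksort.
prefix = "def quicksort(arr):\n    if len(arr) <= 1:\n        return arr\n"
suffix = "\n    return quicksort(left) + [pivot] + quicksort(right)\n"
prompt = build_fim_prompt(prefix, suffix)
```

The resulting string would be fed to the model as-is; generation then continues from the end sentinel, producing the code that belongs in the hole.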

Features

  • Multiple model sizes (1.3B, 5.7B, 6.7B, 33B) to suit different compute budgets and use cases
  • Trained from scratch on ~2 trillion tokens, with 87% code and 13% natural language
  • Project-level context window (16K) and fill-in-the-blank objective for better infilling
  • Strong performance on code benchmarks (HumanEval, MultiPL-E, APPS, etc.)
  • Permissive license with “responsible downstream use” clause
  • Inference tooling and evaluation scripts for code generation and benchmarking


Categories

AI Models

License

MIT License


Additional Project Details

Programming Language

Python

Related Categories

Python AI Models

Registered

2025-10-03