DeepSeek-Coder is a series of code-specialized language models designed to generate, complete, and infill code (and mixed code and natural language) fluently in both English and Chinese. The models are trained from scratch on roughly 2 trillion tokens, about 87% code and 13% natural language. The training data is organized at the project level rather than as isolated snippets, and training combines a 16K context window with a secondary fill-in-the-blank objective to improve contextual completion and infilling. The models come in multiple sizes (1.3B, 5.7B, 6.7B, 33B) so users can trade off inference cost against capability. The repo provides model weights, documentation of the training setup, evaluation results on common benchmarks (HumanEval, MultiPL-E, APPS, etc.), and inference tools.
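For basic code completion, a minimal sketch with Hugging Face Transformers might look like the following. The checkpoint name is an assumption (the weights are expected to be published on the Hub under ids such as `deepseek-ai/deepseek-coder-1.3b-base`); substitute whichever size fits your hardware.

```python
# Minimal code-completion sketch using Hugging Face Transformers.
# The Hub id below is an assumption; check the repo for the published checkpoints.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/deepseek-coder-1.3b-base"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

prompt = "# write a quick-sort function in Python\ndef quick_sort(arr):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```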
Features
- Multiple model sizes (1.3B, 5.7B, 6.7B, 33B) to suit different compute budgets and use cases
- Trained from scratch on ~2 trillion tokens, with 87% code and 13% natural language
- Project-level pretraining with a 16K context window and a fill-in-the-blank objective for better infilling (see the infilling sketch after this list)
- Strong performance on code benchmarks (HumanEval, MultiPL-E, APPS, etc.)
- Permissive license with “responsible downstream use” clause
- Inference tooling and evaluation scripts for code generation and benchmarking
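Because the models are trained with a fill-in-the-blank objective, they can also infill the gap between a prefix and a suffix. The sketch below illustrates the idea; the sentinel tokens are an assumption based on published DeepSeek-Coder examples, so verify them against the tokenizer's special tokens before relying on this format.

```python
# Infilling (fill-in-the-middle) sketch. The sentinel tokens are assumed
# from published DeepSeek-Coder examples; verify against the tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/deepseek-coder-1.3b-base"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

prefix = "def quick_sort(arr):\n    if len(arr) <= 1:\n        return arr\n"
suffix = "\n    return quick_sort(left) + [pivot] + quick_sort(right)\n"
prompt = f"<｜fim▁begin｜>{prefix}<｜fim▁hole｜>{suffix}<｜fim▁end｜>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens: the model's proposal for the missing middle.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```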