Step3-VL-10B is an open-source multimodal foundation model from StepFun AI that combines visual and language understanding in a single architecture. Despite having only about 10 billion parameters, it rivals or surpasses models 10×–20× its size on a wide range of multimodal benchmarks covering reasoning, perception, and complex tasks, placing it among the strongest models in its class. This efficiency comes from unified pre-training on a 1.2 trillion-token multimodal corpus that jointly optimizes a language-aligned perception encoder with the language decoder, creating deep synergy between image processing and text understanding.
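The encoder-decoder coupling described above follows the now-common pattern of projecting vision features into the decoder's embedding space so both modalities are trained together. The sketch below illustrates only that general pattern; the module names, dimensions, and fusion strategy are placeholders, not Step3-VL-10B's actual implementation.

```python
import torch
import torch.nn as nn

class VisionLanguageModel(nn.Module):
    """Generic vision-encoder-to-decoder coupling (illustrative only).

    Image features from a perception encoder are projected into the
    language decoder's embedding space and decoded jointly with text,
    so both modalities are optimized together during pre-training.
    """

    def __init__(self, vision_encoder: nn.Module, language_decoder: nn.Module,
                 vision_dim: int = 1024, hidden_dim: int = 4096):
        super().__init__()
        self.vision_encoder = vision_encoder                 # language-aligned perception encoder
        self.projector = nn.Linear(vision_dim, hidden_dim)   # maps image features into the LLM space
        self.language_decoder = language_decoder             # autoregressive text decoder

    def forward(self, pixel_values: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        image_feats = self.vision_encoder(pixel_values)      # (B, N_img, vision_dim)
        image_embeds = self.projector(image_feats)           # (B, N_img, hidden_dim)
        # Prepend projected image tokens to the text embeddings so the
        # decoder attends over a single interleaved multimodal sequence.
        fused = torch.cat([image_embeds, text_embeds], dim=1)
        return self.language_decoder(fused)
```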
Features
- Vision-language multimodal foundation model capable of image + text understanding
- Compact 10 billion parameter size with performance rivaling much larger models
- Unified pre-training on 1.2 trillion multimodal tokens
- Post-training pipeline with supervised finetuning and reinforcement learning
- Parallel Coordinated Reasoning (PaCoRe) for enhanced perceptual reasoning
- Open-source release with downloadable weights and inference support (see the usage sketch below)
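Assuming the weights are published on the Hugging Face Hub with a transformers-compatible processor, a minimal inference sketch might look like the following. The repository id, prompt format, and processor call are assumptions; consult StepFun's official model card for the exact loading instructions.

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

# Hypothetical repository id -- check the official release for the real one.
MODEL_ID = "stepfun-ai/Step3-VL-10B"

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Pair an image with a text prompt and generate a response.
image = Image.open("example.jpg")
inputs = processor(images=image, text="Describe this image.", return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)

print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```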