FastViT is an efficient vision backbone family that blends convolutional inductive biases with transformer capacity, delivering strong accuracy under mobile and real-time inference budgets. The design targets a favorable point on the latency-accuracy Pareto curve for edge devices as well as server scenarios where throughput and tail latency matter, using lightweight attention and carefully engineered blocks to keep token-mixing costs low while preserving representational power.

The training and inference recipes are built for straightforward integration into common vision tasks such as classification, detection, and segmentation. The codebase provides reference implementations and checkpoints that make it easy to evaluate or fine-tune on downstream datasets; in practice, FastViT serves as a drop-in backbone that reduces compute and memory pressure without exotic training tricks.
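As a minimal sketch of evaluating a pretrained checkpoint, the snippet below loads a FastViT model through the `timm` registry and runs a forward pass. The model name `fastvit_t8` and the 256x256 input size are assumptions; substitute whichever variant and resolution your installed timm version actually provides (see `timm.list_models("fastvit*")`).

```python
import torch
import timm

# Load a FastViT classification model; "fastvit_t8" is assumed to be a
# registered name in the installed timm version.
model = timm.create_model("fastvit_t8", pretrained=True)
model.eval()

# 256x256 is a commonly used FastViT evaluation resolution; adjust to match
# the checkpoint you load.
dummy = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)  # e.g. torch.Size([1, 1000]) for an ImageNet-1k head
```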
Features
- Hybrid Conv-Transformer blocks optimized for latency
- Competitive accuracy at mobile/edge inference budgets
- Reference training scripts and pretrained checkpoints
- Compatibility with standard detection/segmentation heads (multi-scale feature extraction sketched after this list)
- Memory-efficient attention and token mixing components
- Simple integration into existing PyTorch pipelines (see the fine-tuning sketch after this list)
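For the detection/segmentation bullet above, here is a hedged sketch of pulling multi-scale feature maps that a standard FPN-style head can consume. It assumes the installed timm build exposes FastViT with `features_only` support; the model name and `out_indices` values are illustrative rather than prescriptive.

```python
import torch
import timm

# Build the backbone as a feature extractor instead of a classifier.
backbone = timm.create_model(
    "fastvit_t8",            # assumed model name; swap for your variant
    pretrained=True,
    features_only=True,       # return intermediate feature maps, not logits
    out_indices=(0, 1, 2, 3), # one map per stage (typically strides 4/8/16/32)
)

x = torch.randn(1, 3, 256, 256)
feats = backbone(x)
for f, ch in zip(feats, backbone.feature_info.channels()):
    print(tuple(f.shape), "channels:", ch)  # feed these maps into an FPN/decoder head
```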
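For the PyTorch-integration bullet, the following is a minimal fine-tuning sketch, not a reference recipe. The `num_classes=10` head, the optimizer settings, and the random tensors standing in for a real dataloader batch are all placeholders; `reset_classifier` follows timm's usual head-replacement convention.

```python
import torch
import timm
from torch import nn, optim

model = timm.create_model("fastvit_t8", pretrained=True)  # assumed model name
model.reset_classifier(num_classes=10)  # swap the ImageNet head for a 10-class head

criterion = nn.CrossEntropyLoss()
optimizer = optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step; plug this into an existing training loop."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors stand in for a dataloader batch in this sketch.
print(train_step(torch.randn(8, 3, 256, 256), torch.randint(0, 10, (8,))))
```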