DFlash is an open-source framework for fast speculative decoding: a lightweight block diffusion model drafts text in parallel while a target large language model verifies it, improving inference speed without sacrificing generation quality. The drafter proposes likely continuations that the main model accepts or rejects, yielding significant throughput gains over traditional autoregressive decoding, which generates one token at a time. By combining block diffusion with efficient batching, this approach has been shown to deliver lossless acceleration on models such as Qwen3-8B, making it well suited to latency- and cost-sensitive applications. The project includes support for multiple draft models, example integration code, and benchmarking scripts, and is structured to work with popular serving stacks such as SGLang and the Hugging Face Transformers ecosystem.
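The draft-then-verify loop described above can be sketched in a few lines. This is a hypothetical toy illustration, not DFlash's actual API: the real system drafts whole blocks with a diffusion model, whereas here `draft_model` and `target_model` are stand-in functions so the control flow is runnable.

```python
# Toy sketch of speculative decoding's accept/reject loop (illustrative only;
# DFlash's real drafter is a block diffusion model, not these stand-ins).

def draft_model(prefix, k=4):
    """Cheap stand-in drafter: proposes k tokens, drifting off after two."""
    out, p = [], prefix
    for i in range(k):
        p = (p + 1) % 7 if i < 2 else (p + 2) % 7  # diverges from the target after 2 tokens
        out.append(p)
    return out

def target_model(prefix):
    """Stand-in for the expensive target model's next-token choice."""
    return (prefix + 1) % 7

def speculative_step(prefix, k=4):
    """Accept drafted tokens until the first mismatch with the target,
    then emit the target's own token, so output matches pure target decoding."""
    accepted = []
    for tok in draft_model(prefix, k):
        if tok != target_model(prefix):
            break  # drafter diverged; discard the rest of the block
        accepted.append(tok)
        prefix = tok
    # The target always contributes one token, keeping the step lossless.
    accepted.append(target_model(prefix))
    return accepted

print(speculative_step(0))  # first two drafted tokens accepted, then the target's own
```

The key property, which DFlash preserves, is that the verified output is identical to what the target model would have produced alone; the drafter only changes how many tokens each expensive target call yields.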

Features

  • Block diffusion based speculative decoding
  • Parallel drafting for accelerated generation
  • Integration examples with SGLang and Transformers
  • Support for multiple draft model sizes
  • Benchmarking and performance scripts
  • Modular, research-friendly architecture

Categories

AI Models

License

MIT License

Additional Project Details

Programming Language

Python

Related Categories

Python AI Models

Registered

2026-01-28