Phi-4-mini-flash-reasoning (Microsoft) vs. Step 3.5 Flash (StepFun)
About Phi-4-mini-flash-reasoning
Phi-4-mini-flash-reasoning is a 3.8 billion-parameter open model in Microsoft's Phi family, purpose-built for edge, mobile, and other resource-constrained environments where compute, memory, and latency are tightly limited. It introduces the SambaY decoder-hybrid-decoder architecture, with Gated Memory Units (GMUs) interleaved alongside Mamba state-space and sliding-window attention layers, delivering up to 10× higher throughput and a 2–3× reduction in latency compared to its predecessor without sacrificing advanced math and logic reasoning performance. Supporting a 64K-token context length and fine-tuned on high-quality synthetic data, it excels at long-context retrieval, reasoning tasks, and real-time inference, and it can be deployed on a single GPU. Phi-4-mini-flash-reasoning is available via Azure AI Foundry, the NVIDIA API Catalog, and Hugging Face, enabling developers to build fast, scalable, logic-intensive applications.
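Because the model is published openly on Hugging Face, the quickest way to try its reasoning behavior is a standard transformers text-generation loop. The sketch below is a minimal illustration, not official sample code: it assumes the repository ID microsoft/Phi-4-mini-flash-reasoning, and loading options such as dtype, device placement, and trust_remote_code may need adjusting for your environment.

```python
# Minimal sketch: run Phi-4-mini-flash-reasoning with Hugging Face transformers.
# Repo ID and loading flags are assumptions; check the model card for exact usage.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-flash-reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # 3.8B parameters fit comfortably on a single GPU
    device_map="auto",
    trust_remote_code=True,       # the hybrid SambaY layers may ship as custom code
)

messages = [{"role": "user", "content": "Solve 3x + 7 = 22 and explain each step."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```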
About Step 3.5 Flash
Step 3.5 Flash is an open-source foundation language model from StepFun engineered for frontier reasoning and agentic capabilities with exceptional efficiency. It is built on a sparse Mixture of Experts (MoE) architecture that activates only about 11 billion of its roughly 196 billion parameters per token, delivering high-density intelligence and real-time responsiveness. Its 3-way Multi-Token Prediction (MTP-3) enables generation throughput in the hundreds of tokens per second for complex multi-step reasoning chains and task execution, and a hybrid sliding-window attention approach supports long contexts efficiently, reducing computational overhead across large datasets or codebases. The model performs strongly on reasoning, coding, and agentic benchmarks, rivaling or exceeding many larger proprietary models, and includes a scalable reinforcement learning framework for consistent self-improvement.
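Since the weights are open and the model is also offered behind an OpenAI-compatible API, a quick way to exercise its reasoning and agentic behavior is the standard openai Python client pointed at a compatible endpoint. The base URL and model name below are assumptions for illustration; substitute the identifiers from StepFun's documentation, or the address of your own vLLM/SGLang deployment of the open weights.

```python
# Minimal sketch: query Step 3.5 Flash through an OpenAI-compatible endpoint.
# The base_url and model name are illustrative assumptions, not confirmed values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.stepfun.com/v1",  # assumed StepFun endpoint
    api_key="YOUR_STEPFUN_API_KEY",
)

response = client.chat.completions.create(
    model="step-3.5-flash",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a careful, step-by-step reasoner."},
        {"role": "user", "content": "Outline a plan to split a 2,000-line Python module into packages."},
    ],
    temperature=0.7,
    max_tokens=1024,
)
print(response.choices[0].message.content)
```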
Platforms Supported
Phi-4-mini-flash-reasoning: Windows, Mac, Linux, Cloud, On-Premises, iPhone, iPad, Android, Chromebook
Step 3.5 Flash: Windows, Mac, Linux, Cloud, On-Premises, iPhone, iPad, Android, Chromebook
Audience
Phi-4-mini-flash-reasoning: AI professionals and developers who need a model for advanced inference on edge and mobile platforms
Step 3.5 Flash: Developers, researchers, and AI engineers who want a powerful open-source foundation model for fast, deep reasoning, coding assistance, and agentic task execution
Support
Phi-4-mini-flash-reasoning: Phone Support, 24/7 Live Support, Online
Step 3.5 Flash: Phone Support, 24/7 Live Support, Online
API
Phi-4-mini-flash-reasoning: Offers API
Step 3.5 Flash: Offers API
Pricing
Phi-4-mini-flash-reasoning: No information available; Free Version, Free Trial
Step 3.5 Flash: Free; Free Version, Free Trial
Training
Phi-4-mini-flash-reasoning: Documentation, Webinars, Live Online, In Person
Step 3.5 Flash: Documentation, Webinars, Live Online, In Person
Company Information
Phi-4-mini-flash-reasoning: Microsoft, founded 1975, United States
azure.microsoft.com/en-us/blog/reasoning-reimagined-introducing-phi-4-mini-flash-reasoning/
Step 3.5 Flash: StepFun, founded 2023, China
static.stepfun.com/blog/step-3.5-flash/
Integrations
Phi-4-mini-flash-reasoning: Hugging Face, GitHub, Microsoft 365 Copilot, Microsoft Foundry, Microsoft Foundry Agent Service, ModelScope, NVIDIA DRIVE, arXiv
Step 3.5 Flash: Hugging Face, GitHub, Microsoft 365 Copilot, Microsoft Foundry, Microsoft Foundry Agent Service, ModelScope, NVIDIA DRIVE, arXiv