gpt-oss is OpenAI’s open-weight family of large language models built for strong reasoning, agentic workflows, and versatile developer use cases. The series includes two models: gpt-oss-120b, a 117-billion-parameter model optimized for general-purpose, high-reasoning tasks that runs on a single H100 GPU, and gpt-oss-20b, a lighter 21-billion-parameter model suited to low-latency or specialized applications on smaller hardware.

Both models use native MXFP4 quantization for efficient memory use and support OpenAI’s Harmony response format, which exposes the full chain-of-thought reasoning and enables tool integrations such as function calling, browsing, and Python code execution. The repository provides multiple reference implementations (PyTorch, Triton, and Metal) for educational and experimental use, along with example clients and tools such as a terminal chat app and a Responses API server.
## Features
- Two model sizes: gpt-oss-120b (117B params) and gpt-oss-20b (21B params)
- Native MXFP4 quantization for MoE layers enabling efficient inference
- Full chain-of-thought reasoning with configurable effort levels (low, medium, high)
- Harmony response format for standardized, debuggable model output
- Built-in agentic tool capabilities: function calling, web browsing, Python code execution, structured outputs
- Multiple inference backends: PyTorch, Triton (optimized), Metal (Apple Silicon)
- Reference tools and clients: terminal chat app, Responses API example server
- Licensed under permissive Apache 2.0 for experimentation, customization, and commercial deployment
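As a rough illustration of the Harmony format and the configurable effort levels mentioned above, the sketch below hand-builds a prompt string. The special-token names and the `Reasoning:` system-prompt field are assumptions based on the publicly documented Harmony spec; for real inference, the openai-harmony library should be used, since it handles the exact tokenization.

```python
# Hedged sketch: hand-rendering a conversation into the Harmony token layout.
# Token names (<|start|>, <|channel|>, <|message|>, <|end|>) follow the public
# Harmony spec; prefer the openai-harmony library in practice.

def render_harmony_message(role, content, channel=None):
    """Render a single chat message in Harmony's token layout."""
    header = role if channel is None else f"{role}<|channel|>{channel}"
    return f"<|start|>{header}<|message|>{content}<|end|>"

def render_prompt(reasoning_effort, user_text):
    """Build a minimal prompt with a configurable reasoning-effort level."""
    system = render_harmony_message(
        "system",
        f"You are a helpful assistant.\nReasoning: {reasoning_effort}",
    )
    user = render_harmony_message("user", user_text)
    # Leave the assistant turn open so the model completes it.
    return system + user + "<|start|>assistant"

prompt = render_prompt("high", "What is 2 + 2?")
print(prompt)
```

In a real client, the rendered string would be tokenized and sent to one of the reference backends, and the model's reply would come back in the same channelled layout (analysis vs. final output), which is what makes the chain of thought inspectable.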