gpt-oss-safeguard is an open-weight reasoning model family from OpenAI, designed specifically for content-safety and moderation tasks. Rather than just outputting a numeric “safety score,” it is trained to reason about content against a user-provided policy, so moderation definitions stay flexible and customizable rather than fixed, which is ideal when different platforms have different safety standards. The family comes in two variants: gpt-oss-safeguard-120b, a large 120B-parameter version for heavy-duty, high-accuracy reasoning, and gpt-oss-safeguard-20b, a smaller version suited to lower-latency or compute-constrained deployments. At inference time you supply both your own safety policy (written as a structured prompt) and the content to evaluate; the model returns its decision together with the reasoning behind it, enabling transparent, auditable moderation, as sketched below. ...
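
As a concrete illustration, here is a minimal sketch of that inference flow, assuming the 20B variant is served behind a local OpenAI-compatible endpoint (for example via vLLM). The `base_url`, the model id, and the toy spam policy below are all illustrative assumptions, not a prescribed format or the official API surface.

```python
# Minimal sketch: classify one piece of content against a custom policy using
# gpt-oss-safeguard behind an OpenAI-compatible endpoint (e.g., a local vLLM
# server). The base_url, model id, and policy text are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local server address
    api_key="EMPTY",                      # local servers typically ignore the key
)

# The custom policy goes in the system message; the content to evaluate goes
# in the user message. This policy wording is a hypothetical example.
POLICY = """\
You are a content moderator. Classify the user-supplied content against:

SP1: Spam — repetitive, unsolicited promotional content.
  VIOLATES: bulk advertising, link farms, engagement bait.
  ALLOWED: genuine product questions or reviews.

Return a verdict ("violates" or "allowed") followed by a short justification.
"""

content = "LIMITED OFFER!!! Click my link 10x daily to WIN FREE crypto!!!"

response = client.chat.completions.create(
    model="openai/gpt-oss-safeguard-20b",  # assumed model id on the server
    messages=[
        {"role": "system", "content": POLICY},
        {"role": "user", "content": content},
    ],
)

# The reply contains the verdict plus the model's stated justification,
# which is what makes the decision auditable.
print(response.choices[0].message.content)
```

Because the policy lives in the prompt rather than in the weights, changing your moderation rules is a prompt edit, not a retraining run.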