Guardrails is a Python package that lets a user add structure, type, and quality guarantees to the outputs of large language models (LLMs). At the heart of Guardrails is the RAIL spec (.rail, for Reliable AI markup Language), a language-agnostic, human-readable format for specifying structure and type information, validators, and corrective actions over LLM outputs. A RAIL spec describes the expected structure and types of the LLM output, the quality criteria the output must meet to be considered valid, and the corrective actions to take if the output is invalid, as in the sketch below.
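For illustration, here is a minimal sketch of what a RAIL spec can look like, embedded as a Python string: one output field, a quality validator, and a corrective action. The specific validator name ("two-words"), the "on-fail-..." attribute, the "{{...}}" placeholder syntax, and the "@complete_json_suffix" prompt helper follow older Guardrails documentation and are assumptions here; exact names vary across versions.

```python
# Illustrative RAIL spec (syntax based on older Guardrails docs; details may
# differ in current releases). The <output> element describes the expected
# structure and types, `format` attaches a quality validator, and the
# `on-fail-...` attribute names the corrective action taken when it fails.
RAIL_SPEC = """
<rail version="0.1">
<output>
    <string
        name="pet_name"
        description="A two-word name for the pet"
        format="two-words"
        on-fail-two-words="reask"
    />
</output>
<prompt>
Suggest a name for a pet {{animal_type}}.

@complete_json_suffix
</prompt>
</rail>
"""
```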
Features
- Performs Pydantic-style validation of LLM outputs
- Supports semantic validation, such as checking for bias in generated text or for bugs in generated code
- Takes corrective actions (e.g., re-asking the LLM) when validation fails
- Enforces structure and type guarantees (e.g., JSON)
- Provides a format (.rail) for enforcing a specification on an LLM output
- Offers a lightweight wrapper around LLM API calls to implement this spec (see the usage sketch below)
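The snippet below is a rough sketch of that wrapper. It assumes the Guard.from_rail_string entry point, the prompt_params keyword, and the two-value return from older guardrails releases, together with the legacy openai.Completion client; newer versions of both libraries expose different APIs, so treat the signatures as assumptions to check against the installed version.

```python
# Usage sketch: build a Guard from a RAIL spec and wrap an LLM API call with it.
# The Guard compiles the prompt, calls the LLM, validates the output against
# the spec, and re-asks the model when validation fails.
# (API surface follows older guardrails releases and may differ today.)
import guardrails as gd
import openai

RAIL_SPEC = """
<rail version="0.1">
<output>
    <string name="summary" description="A one-sentence summary of the document" />
</output>
<prompt>
Summarize the following document:

{{document}}
</prompt>
</rail>
"""

guard = gd.Guard.from_rail_string(RAIL_SPEC)

raw_llm_output, validated_output = guard(
    openai.Completion.create,
    prompt_params={"document": "Guardrails adds structure, type, and quality checks to LLM outputs."},
    engine="text-davinci-003",
    max_tokens=256,
    temperature=0.0,
)

print(validated_output)  # e.g. {"summary": "..."}
```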
Categories
Large Language Models (LLM)
License
Apache License V2.0