A self-hardening prompt injection detector. Rebuff is designed to protect AI applications from prompt injection (PI) attacks through a multi-layered defense. Rebuff is still a prototype and cannot provide 100% protection against prompt injection attacks. Add canary tokens to prompts to detect leakages, allowing the framework to store embeddings about the incoming prompt in the vector database and prevent future attacks.
Features
- Filter out potentially malicious input before it reaches the LLM
- Use a dedicated LLM to analyze incoming prompts and identify potential attacks
- Store embeddings of previous attacks in a vector database to recognize and prevent similar attacks in the future
- Add canary tokens to prompts to detect leakages, allowing the framework to store embeddings about the incoming prompt in the vector database and prevent future attacks
- Heuristics
- LLM-based detection
- VectorDB
Categories
Artificial IntelligenceLicense
Apache License V2.0Follow Rebuff
You Might Also Like
Red Hat Enterprise Linux (RHEL) on Microsoft Azure provides a secure, reliable, and flexible foundation for your cloud infrastructure. Red Hat Enterprise Linux on Microsoft Azure is ideal for enterprises seeking to enhance their cloud environment with seamless integration, consistent performance, and comprehensive support.
Rate This Project
Login To Rate This Project
User Reviews
Be the first to post a review of Rebuff!