PKU Beaver
Constrained Value Alignment via Safe Reinforcement Learning
Safe-RLHF decouples helpfulness and harmlessness into separate training signals, so the model can be optimized for useful responses while harmful behavior is explicitly penalized. To support this, the project provides datasets of human-labeled comparisons in which the two dimensions are annotated independently: each example carries a helpfulness preference alongside a separate safety judgment, with labels covering categories such as harmful language, unethical behavior, privacy violations, and other sensitive topics. Building on these annotations, Safe-RLHF casts training as a constrained optimization problem, maximizing the reward objective subject to a safety constraint so that harmful outputs carry an explicit cost.
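To make the decoupled annotation concrete, here is a minimal sketch of what one such record might look like. The field names (`better_response_id`, `safer_response_id`, and so on) are illustrative assumptions for this sketch, not the project's published schema:

```python
# Hedged sketch of one decoupled preference record; the field names
# below are illustrative assumptions, not the project's actual schema.
record = {
    "prompt": "How can I access my neighbor's Wi-Fi?",
    "response_0": "Ask them politely and offer to share the cost.",
    "response_1": "Use a cracking tool to brute-force their password.",
    # Helpfulness preference: which response answers the prompt more directly.
    "better_response_id": 1,
    # Harmlessness preference: which response is safer, judged separately.
    "safer_response_id": 0,
    # Per-response safety flags and harm-category tags.
    "response_0_safe": True,
    "response_1_safe": False,
    "response_1_harm_categories": ["privacy_violation", "unethical_behavior"],
}

# A record where the two labels disagree is exactly why decoupling matters:
# the more "responsive" answer here is also the unsafe one.
assert record["better_response_id"] != record["safer_response_id"]
```

Because the two judgments are recorded independently, a response can win the helpfulness comparison while losing the safety comparison, which is what allows separate helpfulness and harmlessness signals to be trained from the same data.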
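The constrained objective itself is commonly handled with a Lagrangian relaxation: maximize expected reward subject to expected cost staying below a threshold, with a multiplier that grows while the constraint is violated. The PyTorch sketch below illustrates that update rule under assumed names (`rewards` and `costs` for per-sample scores from separate reward and cost models, `cost_limit` for the safety threshold); it is a minimal illustration of the technique, not the project's training loop.

```python
import torch

# Minimal, hedged sketch of a Lagrangian-constrained policy update.
# All names here are illustrative assumptions, not the project's API.

log_lambda = torch.zeros(1, requires_grad=True)       # log keeps lambda > 0
lambda_optimizer = torch.optim.SGD([log_lambda], lr=0.01)

def constrained_policy_loss(log_probs, rewards, costs):
    """Policy-gradient loss on reward minus lambda-weighted cost."""
    lam = log_lambda.exp().item()
    # High cost shrinks the effective advantage; dividing by (1 + lam)
    # keeps the combined signal on roughly the same scale as the reward.
    advantages = (rewards - lam * costs) / (1.0 + lam)
    return -(log_probs * advantages).mean()

def update_lambda(costs, cost_limit):
    """Gradient ascent on lambda: it grows while mean cost exceeds the
    limit and decays back toward zero once the constraint is satisfied."""
    lambda_optimizer.zero_grad()
    loss = -(log_lambda.exp() * (costs.mean() - cost_limit))
    loss.backward()
    lambda_optimizer.step()

# Toy usage for one batch, with random stand-ins for model outputs.
log_probs = torch.randn(8, requires_grad=True)
rewards, costs = torch.randn(8), torch.randn(8)
constrained_policy_loss(log_probs, rewards, costs).backward()
update_lambda(costs, cost_limit=0.0)
```

Each training step alternates a policy update against the mixed advantage with a multiplier update, so the penalty weight adapts automatically to how often the current policy violates the safety constraint.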