lm-human-preferences is the official OpenAI codebase that implements the method from the paper Fine-Tuning Language Models from Human Preferences. It shows how to align language models with human judgments by training a reward model from human comparisons and then fine-tuning a policy model against that reward signal. The repository includes scripts to train the reward model (learning to score outputs from pairwise human comparisons) and to fine-tune a policy (a language model) with PPO, using the reward model's scores, combined with a KL penalty toward the original model, as the reward.
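
To make the two training signals concrete, here is a minimal NumPy sketch (not the repository's actual TensorFlow code) of the two quantities described above: a pairwise comparison loss for the reward model, and the KL-penalized per-sample reward used when fine-tuning the policy. The function names, the toy numbers, and the `beta` value are illustrative assumptions, not values taken from the codebase.

```python
import numpy as np

def reward_model_loss(reward_chosen, reward_rejected):
    """Pairwise comparison loss for the reward model: the human-preferred
    continuation should score higher than the rejected one.
    Equivalent to -log sigmoid(r_chosen - r_rejected), averaged over the batch."""
    margin = reward_chosen - reward_rejected
    return float(np.mean(np.log1p(np.exp(-margin))))

def kl_penalized_reward(rm_score, logprob_policy, logprob_ref, beta=0.1):
    """Per-sample reward used during policy fine-tuning: the reward model's
    score minus a KL penalty that keeps the fine-tuned policy close to the
    original language model. beta is an illustrative coefficient."""
    return rm_score - beta * (logprob_policy - logprob_ref)

# Toy example: in the second pair the model currently prefers the rejected
# sample, so that pair contributes a larger loss.
chosen = np.array([1.2, -0.3])
rejected = np.array([0.4, 0.5])
print(reward_model_loss(chosen, rejected))

# Toy example: a sample the reward model likes, lightly penalized for
# drifting away from the reference model's log-probability.
print(kl_penalized_reward(rm_score=0.8, logprob_policy=-12.0, logprob_ref=-12.5))
```

The KL term is what keeps the fine-tuned policy from collapsing onto degenerate outputs that merely exploit the reward model; without it, the policy can drift far from fluent language while still scoring well.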