trlX is a distributed training framework designed from the ground up for fine-tuning large language models with reinforcement learning, using either a provided reward function or a reward-labeled dataset. Training support for Hugging Face models is provided by Accelerate-backed trainers, which let users fine-tune causal and T5-based language models of up to 20B parameters, such as facebook/opt-6.7b, EleutherAI/gpt-neox-20b, and google/flan-t5-xxl. For models beyond 20B parameters, trlX provides NVIDIA NeMo-backed trainers that use efficient parallelism techniques to scale effectively.
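
A minimal sketch of the reward-function path, following the example in trlX's own documentation (the model name and toy reward are placeholders; keyword names may shift between versions):

    import trlx

    # Fine-tune a small causal LM against a toy reward function:
    # the callable scores every generated sample with a scalar.
    trainer = trlx.train(
        "gpt2",
        reward_fn=lambda samples, **kwargs: [float(s.count("cats")) for s in samples],
    )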

Features

  • Use a reward function (see the sketch above)
  • Use a reward-labeled dataset (see the sketch after this list)
  • Launch distributed training (see the launch note after this list)
  • Logging
  • Examples included
  • Documentation available
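
For the offline, reward-labeled path, a comparable sketch (assuming the samples/rewards keywords from trlX's documented examples; the sample texts and reward values are illustrative):

    import trlx

    # Fine-tune from a reward-labeled dataset: each sample text is
    # paired with a scalar reward instead of a live reward function.
    trainer = trlx.train(
        "gpt2",
        samples=["The cake is a lie.", "The cake is delicious."],
        rewards=[0.0, 1.0],
    )

Distributed runs on the Accelerate-backed trainers are typically launched with the accelerate CLI (accelerate config once, then accelerate launch your_script.py); NeMo-backed runs follow NVIDIA NeMo's own launch conventions.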

License

MIT License

Additional Project Details

Operating Systems

Linux, Mac, Windows

Programming Language

Python

Related Categories

Python Machine Learning Software, Python Reinforcement Learning Frameworks, Python Reinforcement Learning Libraries, Python Reinforcement Learning Algorithms

Registered

2024-08-02