Open Source Python Reinforcement Learning Frameworks

Browse free open source Python Reinforcement Learning Frameworks and projects below. Use the toggles on the left to filter open source Python Reinforcement Learning Frameworks by OS, license, programming language, and project status.

  • 1
    DeepSeek-V3

    Powerful AI language model (MoE) optimized for efficiency/performance

    DeepSeek-V3 is a robust Mixture-of-Experts (MoE) language model developed by DeepSeek, featuring a total of 671 billion parameters, with 37 billion activated per token. It employs Multi-head Latent Attention (MLA) and the DeepSeekMoE architecture to enhance computational efficiency. The model introduces an auxiliary-loss-free load balancing strategy and a multi-token prediction training objective to boost performance. Trained on 14.8 trillion diverse, high-quality tokens, DeepSeek-V3 underwent supervised fine-tuning and reinforcement learning to fully realize its capabilities. Evaluations indicate that it outperforms other open-source models and rivals leading closed-source models, achieving this with a training duration of 55 days on 2,048 Nvidia H800 GPUs, costing approximately $5.58 million.
    Downloads: 58 This Week
    Last Update:
    See Project
  • 2
    DeepSeek R1

    Open-source, high-performance AI model with advanced reasoning

    DeepSeek-R1 is an open-source large language model developed by DeepSeek, designed to excel in complex reasoning tasks across domains such as mathematics, coding, and language. DeepSeek R1 offers unrestricted access for both commercial and academic use. The model employs a Mixture of Experts (MoE) architecture, comprising 671 billion total parameters with 37 billion active parameters per token, and supports a context length of up to 128,000 tokens. DeepSeek-R1's training regimen uniquely integrates large-scale reinforcement learning (RL) without relying on supervised fine-tuning, enabling the model to develop advanced reasoning capabilities. This approach has resulted in performance comparable to leading models like OpenAI's o1, while maintaining cost-efficiency. To further support the research community, DeepSeek has released distilled versions of the model based on architectures such as LLaMA and Qwen.
    Downloads: 51 This Week
    Last Update:
    See Project
  • 3
    TorchRL

    A modular, primitive-first, python-first PyTorch library

    TorchRL is an open-source Reinforcement Learning (RL) library for PyTorch. TorchRL provides PyTorch- and Python-first, low- and high-level abstractions for RL that are intended to be efficient, modular, documented, and properly tested. The code is aimed at supporting research in RL. Most of it is written in Python in a highly modular way, such that researchers can easily swap components, transform them, or write new ones with little effort.
    Downloads: 9 This Week
    Last Update:
    See Project
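
    To make the environment and rollout API described above concrete, here is a minimal sketch; it assumes torchrl and gymnasium are installed, and "Pendulum-v1" is just an example environment name:

```python
from torchrl.envs.libs.gym import GymEnv

env = GymEnv("Pendulum-v1")           # wrap a Gym/Gymnasium environment
reset_td = env.reset()                # a TensorDict holding the first observation
rollout = env.rollout(max_steps=10)   # collect a short rollout with random actions
print(rollout)                        # TensorDict of observations, actions, rewards, dones
```
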
  • 4
    Agent S2

    Agent S: an open agentic framework that uses computers like a human

    Simular's Agent S2 represents a leap forward in the development of computer-use agents, capable of autonomously interacting with a range of devices and interfaces. By integrating specialized AI models, Agent S2 delivers state-of-the-art performance, whether on desktop systems or smartphones. Through modular architecture, it efficiently handles complex tasks, such as navigating UIs, performing low-level actions like text selection, and executing high-level strategies like planning. Additionally, the system's proactive hierarchical planning allows for real-time adaptation, making it an ideal solution for businesses seeking to streamline operations and automate digital workflows. Agent S2 is designed with flexibility, enabling seamless scaling for future applications and tasks.
    Downloads: 8 This Week
    Last Update:
    See Project
  • 5
    Multi-Agent Orchestrator

    Flexible and powerful framework for managing multiple AI agents

    Multi-Agent Orchestrator is an AI coordination framework that enables multiple intelligent agents to work together to complete complex, multi-step workflows.
    Downloads: 8 This Week
    Last Update:
    See Project
  • 6
    Trax

    Deep learning with clear code and speed

    Trax is an end-to-end library for deep learning that focuses on clear code and speed. It is actively used and maintained in the Google Brain team. You can run a pre-trained Transformer and create a translator in a few lines of code, and the documentation covers features and resources, API docs, how to open an issue, and a walkthrough of how Trax works, how to make new models, and how to train on your own data. Trax includes basic models (like ResNet, LSTM, Transformer) and RL algorithms (like REINFORCE, A2C, PPO). It is also actively used for research and includes new models like the Reformer and new RL algorithms like AWR. Trax has bindings to a large number of deep learning datasets, including Tensor2Tensor and TensorFlow datasets. You can use Trax either as a library from your own Python scripts and notebooks or as a binary from the shell, which can be more convenient for training large models. It runs without any changes on CPUs, GPUs and TPUs.
    Downloads: 7 This Week
    Last Update:
    See Project
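
    As a rough illustration of the "clear code" style described above, here is a minimal model assembled from Trax layers; the vocabulary size and layer widths are illustrative placeholders, not taken from any particular example:

```python
from trax import layers as tl

# A tiny text classifier assembled from Trax layers (sizes are placeholders).
model = tl.Serial(
    tl.Embedding(vocab_size=8192, d_feature=256),  # token ids -> embedding vectors
    tl.Mean(axis=1),                               # average over the sequence dimension
    tl.Dense(2),                                   # two output classes
    tl.LogSoftmax(),                               # log-probabilities
)
print(model)  # prints the layer structure
```
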
  • 7
    AnyTrading

    The most simple, flexible, and comprehensive OpenAI Gym trading environment

    gym-anytrading is an OpenAI Gym-compatible environment designed for developing and testing reinforcement learning algorithms on trading strategies. It simulates trading environments for financial markets, including stocks and forex.
    Downloads: 6 This Week
    Last Update:
    See Project
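
    A minimal sketch of running random actions in one of the bundled environments follows; note that the reset/step signatures depend on whether your installed gym-anytrading release targets the classic Gym API (shown here) or Gymnasium:

```python
import gym
import gym_anytrading  # registers the "stocks-v0" and "forex-v0" environments

env = gym.make("stocks-v0")
observation = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random Buy/Sell action
    observation, reward, done, info = env.step(action)
print("final info:", info)
```
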
  • 8
    H2O LLM Studio

    Framework and no-code GUI for fine-tuning LLMs

    Welcome to H2O LLM Studio, a framework and no-code GUI designed for fine-tuning state-of-the-art large language models (LLMs). You can also use H2O LLM Studio with the command line interface (CLI) and specify the configuration file that contains all the experiment parameters. To fine-tune using H2O LLM Studio with the CLI, activate the pipenv environment by running make shell. With H2O LLM Studio, training your large language model is easy and intuitive. First, upload your dataset and then start training your model. Start by creating an experiment. You can then monitor and manage your experiment, compare experiments, or push the model to Hugging Face to share it with the community.
    Downloads: 6 This Week
    Last Update:
    See Project
  • 9
    PyBoy

    Game Boy emulator written in Python

    It is highly recommended to read the report to get a light introduction to Game Boy emulation, but be aware that the Python implementation has changed a lot. The report is relevant even if you want to contribute to another emulator or create your own. If you are looking to make a bot or AI, you can find all the external components in the PyBoy Documentation. There is also a short example on our Wiki page Scripts, AI and Bots as well as in the examples directory. If more features are needed, or if you find a bug, don't hesitate to make an issue here on GitHub, or write on our Discord channel. If you need more details, or if you need to compile from source, check out the detailed installation instructions. We support: macOS, Raspberry Pi (Raspbian), Linux (Ubuntu), and Windows 10.
    Downloads: 6 This Week
    Last Update:
    See Project
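
    A minimal scripting sketch is shown below; "game.gb" is a placeholder path to a ROM you own, and the exact constructor and tick options differ between PyBoy releases:

```python
from pyboy import PyBoy

pyboy = PyBoy("game.gb")   # placeholder ROM path
for _ in range(60 * 5):    # emulate roughly five seconds of frames
    pyboy.tick()
pyboy.stop()
```
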
  • 10
    Weights and Biases

    Tool for visualizing and tracking your machine learning experiments

    Use W&B to build better models faster. Track and visualize all the pieces of your machine learning pipeline, from datasets to production models. Quickly identify model regressions. Use W&B to visualize results in real time, all in a central dashboard. Focus on the interesting ML. Spend less time manually tracking results in spreadsheets and text files. Capture dataset versions with W&B Artifacts to identify how changing data affects your resulting models. Reproduce any model, with saved code, hyperparameters, launch commands, input data, and resulting model weights. Set wandb.config once at the beginning of your script to save your hyperparameters, input settings (like dataset name or model type), and any other independent variables for your experiments. This is useful for analyzing your experiments and reproducing your work in the future. Setting configs also allows you to visualize the relationships between features of your model architecture or data pipeline and model performance.
    Downloads: 6 This Week
    Last Update:
    See Project
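
    A minimal sketch of the wandb.config / wandb.log pattern described above; the project name and logged values are placeholders:

```python
import wandb

run = wandb.init(
    project="my-rl-experiments",          # placeholder project name
    config={"learning_rate": 3e-4, "env": "CartPole-v1", "algo": "PPO"},
)

for step in range(100):
    fake_reward = 0.1 * step              # stand-in for a real training metric
    wandb.log({"episode_reward": fake_reward, "step": step})

run.finish()
```
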
  • 11
    WikiSQL

    A large annotated semantic parsing corpus for developing NL interfaces

    A large crowd-sourced dataset for developing natural language interfaces for relational databases. WikiSQL is the dataset released along with our work Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning. Regarding tokenization and Stanza: when WikiSQL was written three years ago, it relied on Stanza, a CoreNLP Python wrapper that has since been deprecated. If you'd still like to use the tokenizer, please use the docker image. We do not anticipate switching to the current Stanza, as changes to the tokenizer would render the previous results not reproducible.
    Downloads: 6 This Week
    Last Update:
    See Project
  • 12
    AgentUniverse

    agentUniverse is an LLM multi-agent framework

    AgentUniverse is a multi-agent AI framework that enables coordination between multiple intelligent agents for complex task execution and automation.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 13
    MedicalGPT

    MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline

    MedicalGPT trains a medical GPT model with the ChatGPT training pipeline, implementing pretraining, supervised fine-tuning, reward modeling, and reinforcement learning. It builds large medical models through secondary pre-training, supervised fine-tuning, reward modeling, and reinforcement learning training.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 14
    dm_control

    DeepMind's software stack for physics-based simulation

    DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo physics. The MuJoCo Python bindings support three different OpenGL rendering backends: EGL (headless, hardware-accelerated), GLFW (windowed, hardware-accelerated), and OSMesa (purely software-based). At least one of these three backends must be available in order to render through dm_control. Hardware rendering with a windowing system is supported via GLFW and GLEW. On Linux these can be installed using your distribution's package manager. "Headless" hardware rendering (i.e. without a windowing system such as X11) requires EXT_platform_device support in the EGL driver. While dm_control has been largely updated to use the pybind11-based bindings provided via the mujoco package, at this time it still relies on some legacy components that are automatically generated.
    Downloads: 5 This Week
    Last Update:
    See Project
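
    A minimal Control Suite sketch, assuming dm_control, MuJoCo, and at least one rendering backend are installed; cartpole/swingup is one of the bundled tasks:

```python
import numpy as np
from dm_control import suite

env = suite.load(domain_name="cartpole", task_name="swingup")
spec = env.action_spec()

time_step = env.reset()
while not time_step.last():
    # Sample a random action within the bounds declared by the action spec.
    action = np.random.uniform(spec.minimum, spec.maximum, size=spec.shape)
    time_step = env.step(action)
print("final reward:", time_step.reward)
```
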
  • 15
    OpenRLHF

    An Easy-to-use, Scalable and High-performance RLHF Framework

    OpenRLHF is an easy-to-use, scalable, and high-performance framework for Reinforcement Learning with Human Feedback (RLHF). It supports various training techniques and model architectures.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 16
    Project Malmo

    A platform for Artificial Intelligence experimentation on Minecraft

    How can we develop artificial intelligence that learns to make sense of complex environments? That learns from others, including humans, how to interact with the world? That learns transferable skills throughout its existence, and applies them to solve new, challenging problems? Project Malmo sets out to address these core research challenges by integrating (deep) reinforcement learning, cognitive science, and many ideas from artificial intelligence. The Malmo platform is a sophisticated AI experimentation platform built on top of Minecraft, and designed to support fundamental research in artificial intelligence. The Project Malmo platform consists of a mod for the Java version, and code that helps artificial intelligence agents sense and act within the Minecraft environment. The two components can run on Windows, Linux, or Mac OS, and researchers can program their agents in any programming language they’re comfortable with.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 17
    Ray

    A unified framework for scalable computing

    Modern workloads like deep learning and hyperparameter tuning are compute-intensive and require distributed or parallel execution. Ray makes it effortless to parallelize single machine code: go from a single CPU to multi-core, multi-GPU or multi-node with minimal code changes. Accelerate your PyTorch and TensorFlow workload with a more resource-efficient and flexible distributed execution framework powered by Ray. Accelerate your hyperparameter search workloads with Ray Tune. Find the best model and reduce training costs by using the latest optimization algorithms. Deploy your machine learning models at scale with Ray Serve, a Python-first and framework-agnostic model serving framework. Scale reinforcement learning (RL) with RLlib, a framework-agnostic RL library that ships with 30+ cutting-edge RL algorithms including A3C, DQN, and PPO. Easily build out scalable, distributed systems in Python with simple and composable primitives in Ray Core.
    Downloads: 4 This Week
    Last Update:
    See Project
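
    A minimal Ray Core sketch showing the remote-function pattern mentioned above (Ray Tune, Serve, and RLlib build on the same primitives):

```python
import ray

ray.init()  # start a local Ray runtime

@ray.remote
def square(x):
    return x * x

futures = [square.remote(i) for i in range(8)]  # scheduled on parallel workers
print(ray.get(futures))                         # [0, 1, 4, 9, 16, 25, 36, 49]
```
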
  • 18
    Best-of Machine Learning with Python

    A ranked list of awesome machine learning Python libraries

    This curated list contains 900 awesome open-source projects with a total of 3.3M stars grouped into 34 categories. All projects are ranked by a project-quality score, which is calculated based on various metrics automatically collected from GitHub and different package managers. If you like to add or update projects, feel free to open an issue, submit a pull request, or directly edit the projects.yaml. Contributions are very welcome! General-purpose machine learning and deep learning frameworks.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 19
    Deep Reinforcement Learning for Keras

    Deep Reinforcement Learning for Keras.

    keras-rl implements some state-of-the-art deep reinforcement learning algorithms in Python and seamlessly integrates with the deep learning library Keras. Furthermore, keras-rl works with OpenAI Gym out of the box. This means that evaluating and playing around with different algorithms is easy. Of course, you can extend keras-rl according to your own needs. You can use built-in Keras callbacks and metrics or define your own. Even more so, it is easy to implement your own environments and even algorithms by simply extending some simple abstract classes. Documentation is available online.
    Downloads: 3 This Week
    Last Update:
    See Project
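
    A minimal DQN-on-CartPole sketch in the spirit of the keras-rl examples; keras-rl and keras-rl2 target older Keras/TensorFlow and OpenAI Gym releases, so exact imports and versions may vary:

```python
import gym
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import EpsGreedyQPolicy
from rl.memory import SequentialMemory

env = gym.make("CartPole-v1")
nb_actions = env.action_space.n

# Small Q-network over flattened observations (window_length=1).
model = Sequential([
    Flatten(input_shape=(1,) + env.observation_space.shape),
    Dense(16, activation="relu"),
    Dense(nb_actions, activation="linear"),
])

dqn = DQNAgent(model=model, nb_actions=nb_actions,
               memory=SequentialMemory(limit=50000, window_length=1),
               policy=EpsGreedyQPolicy(), nb_steps_warmup=100)
dqn.compile(Adam(learning_rate=1e-3), metrics=["mae"])
dqn.fit(env, nb_steps=10000, verbose=1)
```
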
  • 20
    Google Research Football

    Check out the new game server

    Google Research Football is a reinforcement learning environment simulating soccer matches. It focuses on learning complex behaviors such as team collaboration and strategy formation in competitive settings.
    Downloads: 3 This Week
    Last Update:
    See Project
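
    A minimal sketch using one of the bundled academy scenarios; it assumes the gfootball package and its game engine are installed and uses the classic Gym-style step API:

```python
import gfootball.env as football_env

env = football_env.create_environment(
    env_name="academy_empty_goal_close",  # one of the bundled scenarios
    render=False,
)
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()    # random action from the discrete action set
    obs, reward, done, info = env.step(action)
print("episode finished, last reward:", reward)
```
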
  • 21
    Gymnasium

    An API standard for single-agent reinforcement learning environments

    Gymnasium is a fork of OpenAI Gym, maintained by the Farama Foundation, that provides a standardized API for reinforcement learning environments. It improves upon Gym with better support, maintenance, and additional features while maintaining backward compatibility.
    Downloads: 3 This Week
    Last Update:
    See Project
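
    A minimal loop using the standard single-agent API (reset returns an observation and info; step returns terminated and truncated separately):

```python
import gymnasium as gym

env = gym.make("CartPole-v1")
observation, info = env.reset(seed=42)
for _ in range(200):
    action = env.action_space.sample()  # replace with your policy
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()
env.close()
```
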
  • 22
    Habitat-Lab

    A modular high-level library to train embodied AI agents

    Habitat-Lab is a modular high-level library for end-to-end development in embodied AI. It is designed to train agents to perform a wide variety of embodied AI tasks in indoor environments, as well as develop agents that can interact with humans in performing these tasks. It allows users to train agents in a wide variety of single- and multi-agent tasks (e.g. navigation, rearrangement, instruction following, question answering, human following), as well as to define novel tasks; to configure and instantiate a diverse set of embodied agents, including commercial robots and humanoids, specifying their sensors and capabilities; and it provides algorithms for single- and multi-agent training (via imitation or reinforcement learning, or no learning at all as in SensePlanAct pipelines), as well as tools to benchmark their performance on the defined tasks using standard metrics.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 23
    RWARE

    A multi-agent reinforcement learning environment

    robotic-warehouse is a simulation environment and framework for robotic warehouse automation, enabling research and development of AI and robotic agents to manage warehouse logistics, such as item picking and transport.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 24
    Brax

    Massively parallel rigidbody physics simulation

    Brax is a fast and fully differentiable physics engine for large-scale rigid body simulations, built on JAX. It is designed for research in reinforcement learning and robotics, enabling efficient simulations and gradient-based optimization.
    Downloads: 2 This Week
    Last Update:
    See Project
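
    A minimal sketch, assuming brax and jax are installed; "ant" is one of the bundled environments and the zero action is only a placeholder policy:

```python
import jax
import jax.numpy as jnp
from brax import envs

env = envs.create(env_name="ant")
reset_fn = jax.jit(env.reset)   # JIT-compile reset and step
step_fn = jax.jit(env.step)

state = reset_fn(jax.random.PRNGKey(0))
for _ in range(10):
    action = jnp.zeros(env.action_size)   # placeholder policy: all-zero controls
    state = step_fn(state, action)
print("reward:", state.reward)
```
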
  • 25
    DouZero

    [ICML 2021] DouZero: Mastering DouDizhu

    DouZero is a reinforcement learning-based AI for playing DouDizhu, a popular Chinese card game. It focuses on perfecting AI strategies for competitive play using value-based deep RL techniques.
    Downloads: 2 This Week
    Last Update:
    See Project