  • 1
    OpenRLHF

    An Easy-to-use, Scalable and High-performance RLHF Framework

    OpenRLHF is an easy-to-use, scalable, and high-performance framework for Reinforcement Learning with Human Feedback (RLHF). It supports various training techniques and model architectures.
  • 2
    PettingZoo

    An API standard for multi-agent reinforcement learning environments

    PettingZoo is a standardized API and library for multi-agent reinforcement learning (MARL) environments. It provides a broad set of environments and tools to facilitate the development and evaluation of multi-agent algorithms.
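
    As an illustration of the API standard described above, here is a minimal sketch of PettingZoo's agent-environment-cycle (AEC) interaction loop using the bundled rock-paper-scissors environment; it assumes a recent PettingZoo release, and the exact return values of env.last() vary between versions.

        # Minimal PettingZoo AEC loop (sketch): agents act one at a time.
        from pettingzoo.classic import rps_v2

        env = rps_v2.env()
        env.reset(seed=42)
        for agent in env.agent_iter():
            observation, reward, termination, truncation, info = env.last()
            # Pass None once an agent is done; otherwise sample a random action.
            action = None if termination or truncation else env.action_space(agent).sample()
            env.step(action)
        env.close()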
  • 3
    TorchRL

    A modular, primitive-first, Python-first PyTorch library for reinforcement learning

    TorchRL is an open-source Reinforcement Learning (RL) library for PyTorch. It provides pythonic, low- and high-level abstractions for RL that are intended to be efficient, modular, documented, and properly tested. The code is aimed at supporting research in RL; most of it is written in Python in a highly modular way, so that researchers can easily swap components, transform them, or write new ones with little effort.
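
    To give a flavor of the modular, Python-first design described above, here is a minimal sketch that wraps a Gym environment in TorchRL and collects a short random rollout as a TensorDict; it assumes TorchRL with a Gym/Gymnasium backend is installed and that the Pendulum-v1 environment is available.

        # Sketch: wrap a Gym env in TorchRL and collect a random rollout.
        from torchrl.envs.libs.gym import GymEnv
        from torchrl.envs import TransformedEnv, StepCounter

        env = TransformedEnv(GymEnv("Pendulum-v1"), StepCounter())
        td = env.reset()                      # observations live in a TensorDict
        td = env.rand_step(td)                # take one random step
        rollout = env.rollout(max_steps=10)   # a short random rollout, also a TensorDict
        print(rollout)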
  • 4
    H2O LLM Studio

    Framework and no-code GUI for fine-tuning LLMs

    ...You can also use H2O LLM Studio from the command line interface (CLI) by specifying a configuration file that contains all the experiment parameters. To fine-tune with the CLI, activate the pipenv environment by running make shell. With H2O LLM Studio, training your large language model is easy and intuitive: upload your dataset, then start training by creating an experiment. You can then monitor and manage your experiments, compare them, or push the model to Hugging Face to share it with the community.
  • 5
    EvoTorch

    Advanced evolutionary computation library built on top of PyTorch

    EvoTorch is an evolutionary optimization framework built on top of PyTorch, developed by NNAISENSE. It is designed for large-scale optimization problems, particularly those that require evolutionary algorithms rather than gradient-based methods.
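
    A minimal sketch of how an optimization problem is typically set up in EvoTorch, assuming the current evotorch package layout (a Problem, an algorithm such as SNES, and a stdout logger); treat the exact argument names as an approximation of the library's quickstart example.

        # Sketch: minimize a simple sphere function with an evolutionary search.
        import torch
        from evotorch import Problem
        from evotorch.algorithms import SNES
        from evotorch.logging import StdOutLogger

        def sphere(x: torch.Tensor) -> torch.Tensor:
            return torch.sum(x ** 2)

        problem = Problem("min", sphere, solution_length=10, initial_bounds=(-1.0, 1.0))
        searcher = SNES(problem, stdev_init=0.5)
        StdOutLogger(searcher)   # print progress each generation
        searcher.run(50)         # run 50 generations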
  • 6
    CORL

    High-quality single-file implementations of SOTA offline RL algorithms

    CORL (Clean Offline Reinforcement Learning) provides high-quality, single-file implementations of state-of-the-art offline reinforcement learning algorithms. Each algorithm is kept in a self-contained file so that implementations are easy to read, modify, and benchmark against published results.
  • 7
    ElegantRL

    Massively Parallel Deep Reinforcement Learning

    ElegantRL is an efficient and flexible deep reinforcement learning framework designed for researchers and practitioners. It focuses on simplicity, high performance, and support for advanced RL algorithms.
  • 8
    CleanRL

    High-quality single file implementation of Deep Reinforcement Learning

    ...CleanRL is not a modular library and is not meant to be imported. At the cost of some duplicated code, all implementation details of a DRL algorithm variant are kept in one place and easy to understand, so CleanRL comes with its own pros and cons. Consider using CleanRL if you want to 1) understand all implementation details of an algorithm's variant, or 2) prototype advanced features that other modular DRL libraries do not support (CleanRL has minimal lines of code, which gives you a great debugging experience and spares you the heavy subclassing sometimes required in modular DRL libraries).
  • 9
    Gym

    Toolkit for developing and comparing reinforcement learning algorithms

    ...It supports teaching agents everything from walking to playing games like Pong or Pinball. Gym is an open-source interface to reinforcement learning tasks: the library provides an easy-to-use suite of environments, and you provide the algorithm. You can write your agent using your existing numerical computation library, such as TensorFlow or Theano; Gym makes no assumptions about the structure of your agent and is compatible with any numerical computation library. ...
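
    The "Gym provides the environment, you provide the algorithm" split looks like the classic interaction loop sketched below; this uses the pre-0.26 Gym API (reset returning only the observation and step returning a 4-tuple), while newer Gym/Gymnasium releases return (observation, info) from reset and a 5-tuple from step.

        # Classic Gym control loop: the agent's policy would replace the random sample.
        import gym

        env = gym.make("CartPole-v1")
        observation = env.reset()
        for _ in range(1000):
            action = env.action_space.sample()   # your algorithm goes here
            observation, reward, done, info = env.step(action)
            if done:
                observation = env.reset()
        env.close()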
  • 10
    RLCard

    Reinforcement Learning / AI Bots in Card (Poker) Games

    RLCard is a toolkit for reinforcement learning research on card games. It includes several popular card games and focuses on learning algorithms for imperfect information games like poker and blackjack.
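
    A minimal sketch of how an RLCard environment is typically created and run with random agents, based on the library's documented make/run interface; argument names such as num_actions have changed across releases, so treat this as approximate.

        # Sketch: play one hand of blackjack with random agents.
        import rlcard
        from rlcard.agents import RandomAgent

        env = rlcard.make("blackjack")
        env.set_agents([RandomAgent(num_actions=env.num_actions)
                        for _ in range(env.num_players)])
        trajectories, payoffs = env.run(is_training=False)
        print(payoffs)   # per-player results for this hand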
  • 11
    ReinventCommunity

    Jupyter Notebook tutorials for REINVENT 3.2

    This repository is a collection of useful Jupyter notebooks, code snippets, and example JSON files illustrating the use of REINVENT 3.2.
  • 12
    Texar

    Toolkit for Machine Learning, Natural Language Processing

    Texar is a toolkit aiming to support a broad set of machine learning tasks, especially natural language processing and text generation. Texar provides a library of easy-to-use ML modules and functionalities for composing arbitrary models and algorithms. The toolkit is designed for both researchers and practitioners for fast prototyping and experimentation. Texar was originally developed by, and is actively contributed to by, Petuum and CMU in collaboration with other institutes. A mirror of this repository is maintained by Petuum Open Source. ...
  • 13
    Coach

    Enables easy experimentation with state-of-the-art algorithms

    ...With Coach, it is possible to model an agent by combining various building blocks and to train the agent in multiple environments. The available environments allow testing the agent in different fields such as robotics, autonomous driving, games, and more. Coach exposes a set of easy-to-use APIs for experimenting with new RL algorithms and allows simple integration of new environments to solve. It collects statistics from the training process and supports advanced visualization techniques for debugging the agent being trained. Coach supports many state-of-the-art reinforcement learning algorithms, which are separated into three main classes: value optimization, policy optimization, and imitation learning. ...
  • 14
    Easy-TensorFlow

    Simple and comprehensive tutorials in TensorFlow

    The goal of this repository is to provide comprehensive tutorials for TensorFlow while keeping the code simple. Each tutorial includes a detailed explanation (in .ipynb format) as well as the source code (in .py format). A word on the motivation for this project: TensorFlow is one of the deep learning frameworks with the largest community, and this repository is dedicated to suggesting a simple path to learn it. In addition to the...
  • 15
    Deep Reinforcement Learning for Keras

    Deep Reinforcement Learning for Keras.

    keras-rl implements some state-of-the-art deep reinforcement learning algorithms in Python and seamlessly integrates with the deep learning library Keras. Furthermore, keras-rl works with OpenAI Gym out of the box, which means that evaluating and playing around with different algorithms is easy. Of course, you can extend keras-rl according to your own needs: you can use built-in Keras callbacks and metrics or define your own, and it is easy to implement your own environments and even algorithms by extending some simple abstract classes. Documentation is available online.
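
    To illustrate the "works with OpenAI Gym out of the box" claim, here is a condensed sketch in the spirit of the keras-rl DQN example for CartPole; it assumes an older Keras/Gym combination compatible with keras-rl, so the imports and optimizer arguments may need adjusting for newer versions.

        # Sketch: DQN on CartPole with keras-rl (abridged from the project's example style).
        import gym
        from keras.models import Sequential
        from keras.layers import Dense, Flatten
        from keras.optimizers import Adam

        from rl.agents.dqn import DQNAgent
        from rl.policy import BoltzmannQPolicy
        from rl.memory import SequentialMemory

        env = gym.make("CartPole-v1")
        nb_actions = env.action_space.n

        # Small fully connected Q-network over the flattened observation.
        model = Sequential([
            Flatten(input_shape=(1,) + env.observation_space.shape),
            Dense(16, activation="relu"),
            Dense(nb_actions, activation="linear"),
        ])

        memory = SequentialMemory(limit=50000, window_length=1)
        policy = BoltzmannQPolicy()
        dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory,
                       nb_steps_warmup=10, target_model_update=1e-2, policy=policy)
        dqn.compile(Adam(lr=1e-3), metrics=["mae"])
        dqn.fit(env, nb_steps=10000, visualize=False, verbose=1)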