  • 1
    H2O LLM Studio

    Framework and no-code GUI for fine-tuning LLMs

    Welcome to H2O LLM Studio, a framework and no-code GUI designed for fine-tuning state-of-the-art large language models (LLMs). You can also use H2O LLM Studio from the command line interface (CLI) by specifying a configuration file that contains all the experiment parameters; to fine-tune with the CLI, activate the pipenv environment by running make shell. With H2O LLM Studio, training your large language model is easy and intuitive: upload your dataset, create an experiment, and start training. You can then monitor and manage your experiments, compare them, or push the model to Hugging Face to share it with the community.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 2
    dm_control

    DeepMind's software stack for physics-based simulation

    DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo physics. The MuJoCo Python bindings support three different OpenGL rendering backends: EGL (headless, hardware-accelerated), GLFW (windowed, hardware-accelerated), and OSMesa (purely software-based). At least one of these three backends must be available in order to render through dm_control; a minimal loading sketch follows this entry. ...
    Downloads: 0 This Week
    Last Update:
    See Project
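    A minimal loading sketch for the entry above, not taken from the project itself: it assumes the dm_control suite is installed with a working MuJoCo rendering backend, and the cartpole/swingup task plus the random-action policy are illustrative choices.

    import os
    import numpy as np

    # Choose a rendering backend before importing dm_control; "egl" is an assumption,
    # "glfw" or "osmesa" can be substituted depending on what is installed.
    os.environ.setdefault("MUJOCO_GL", "egl")

    from dm_control import suite

    env = suite.load(domain_name="cartpole", task_name="swingup")  # illustrative task
    action_spec = env.action_spec()

    timestep = env.reset()
    while not timestep.last():
        # Random policy, purely to exercise the step loop.
        action = np.random.uniform(action_spec.minimum, action_spec.maximum,
                                   size=action_spec.shape)
        timestep = env.step(action)

    # Off-screen rendering goes through whichever backend was selected above.
    pixels = env.physics.render(height=240, width=320)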
  • 3
    BindsNET

    Simulation of spiking neural networks (SNNs) using PyTorch

    A Python package for simulating spiking neural networks (SNNs) on CPUs or GPUs using PyTorch Tensor functionality. BindsNET is a spiking neural network simulation library geared towards the development of biologically inspired algorithms for machine learning. This package is used as part of ongoing research on applying SNNs to machine learning (ML) and reinforcement learning (RL) problems in the Biologically Inspired Neural & Dynamical Systems (BINDS) lab. A minimal network-construction sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
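    A minimal network-construction sketch for the entry above, not taken from the project: it assumes BindsNET's Network, Input, LIFNodes, and Connection classes accept the arguments shown (the keyword for input spikes has differed across releases), and the layer sizes, weights, and simulation length are arbitrary.

    import torch
    from bindsnet.network import Network
    from bindsnet.network.nodes import Input, LIFNodes
    from bindsnet.network.topology import Connection

    time_steps = 100               # simulation length (arbitrary)
    n_input, n_lif = 64, 32        # layer sizes (arbitrary)

    # Two-layer spiking network: binary input spikes feeding leaky integrate-and-fire neurons.
    network = Network()
    network.add_layer(Input(n=n_input), name="X")
    network.add_layer(LIFNodes(n=n_lif), name="Y")
    network.add_connection(
        Connection(source=network.layers["X"], target=network.layers["Y"],
                   w=0.05 * torch.rand(n_input, n_lif)),
        source="X", target="Y",
    )

    # Random spike trains of shape [time, n_input] drive the input layer.
    spikes = torch.bernoulli(0.1 * torch.ones(time_steps, n_input)).byte()
    network.run(inputs={"X": spikes}, time=time_steps)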
  • 4
    Habitat-Lab

    A modular high-level library to train embodied AI agents

    ...Providing algorithms for single and multi-agent training (via imitation or reinforcement learning, or no learning at all as in SensePlanAct pipelines), as well as tools to benchmark their performance on the defined tasks using standard metrics.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 5
    Ray

    A unified framework for scalable computing

    ...Accelerate your PyTorch and TensorFlow workloads with a more resource-efficient and flexible distributed execution framework powered by Ray. Accelerate your hyperparameter search workloads with Ray Tune: find the best model and reduce training costs by using the latest optimization algorithms. Deploy your machine learning models at scale with Ray Serve, a Python-first and framework-agnostic model serving framework. Scale reinforcement learning (RL) with RLlib, a framework-agnostic RL library that ships with 30+ cutting-edge RL algorithms including A3C, DQN, and PPO. Easily build out scalable, distributed systems in Python with simple and composable primitives in Ray Core; a minimal Ray Core sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
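    A minimal Ray Core sketch for the entry above, showing the remote-task primitive mentioned in the description; the square function and the input range are illustrative.

    import ray

    ray.init()  # start a local Ray runtime

    @ray.remote
    def square(x):
        # Each call becomes a task that Ray can schedule across cores or nodes.
        return x * x

    # Launch tasks in parallel and gather the results.
    futures = [square.remote(i) for i in range(8)]
    print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]

    ray.shutdown()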
  • 6
    Godot RL Agents

    An open source package that allows video game creators to train AI agents in Godot-based games

    godot_rl_agents is a reinforcement learning integration for the Godot game engine. It allows AI agents to learn how to interact with and play Godot-based games using RL algorithms. The toolkit bridges Godot with Python-based RL libraries like Stable-Baselines3, making it possible to create complex and visually rich RL environments natively in Godot.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
    MedicalGPT

    MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline

    MedicalGPT trains a medical GPT model with the ChatGPT training pipeline, implementing pretraining, supervised fine-tuning, reward modeling, and reinforcement learning. It covers the full workflow for large medical models: secondary pre-training, supervised fine-tuning, reward modeling, and reinforcement learning training.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 8
    Transformer Reinforcement Learning X

    A repo for distributed training of language models with Reinforcement Learning

    trlX is a distributed training framework designed from the ground up to focus on fine-tuning large language models with reinforcement learning using either a provided reward function or a reward-labeled dataset. Training support for Hugging Face models is provided by Accelerate-backed trainers, allowing users to fine-tune causal and T5-based language models of up to 20B parameters, such as facebook/opt-6.7b, EleutherAI/gpt-neox-20b, and google/flan-t5-xxl. For models beyond 20B parameters, trlX provides NVIDIA NeMo-backed trainers that leverage efficient parallelism techniques to scale effectively. A minimal reward-function sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
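    A minimal sketch of the reward-function path described above, not taken from the project verbatim: it assumes trlX's top-level trlx.train entry point accepts a model name, a reward_fn, and prompts as in the project's examples, and the toy reward and prompts are illustrative only.

    import trlx

    # Toy reward: longer completions score higher (illustrative only).
    def reward_fn(samples, **kwargs):
        return [float(len(s)) for s in samples]

    # Fine-tune a small causal LM against the reward function
    # (assumes the trlx.train keyword arguments used in the project's examples).
    trainer = trlx.train(
        "gpt2",
        reward_fn=reward_fn,
        prompts=["The weather today is", "My favorite food is"],
    )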
  • 9
    CORL

    High-quality single-file implementations of SOTA offline RL algorithms

    CORL is an offline reinforcement learning library that provides high-quality, single-file implementations of state-of-the-art offline RL algorithms. Keeping each algorithm in a single self-contained file makes implementation details easy to read and modify while remaining suitable for benchmarking against standard offline RL datasets.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    CleanRL

    High-quality single file implementation of Deep Reinforcement Learning

    CleanRL is a Deep Reinforcement Learning library that provides high-quality single-file implementations with research-friendly features. The implementations are clean and simple, yet they can scale to run thousands of experiments using AWS Batch. CleanRL is not a modular library and is therefore not meant to be imported. At the cost of duplicated code, it makes all implementation details of a DRL algorithm variant easy to understand, so CleanRL comes with its own pros and cons. You should consider using CleanRL if you want to 1) understand all implementation details of an algorithm's variant or 2) prototype advanced features that other modular DRL libraries do not support. Because CleanRL has minimal lines of code, it gives you a great debugging experience and does not require the extensive subclassing sometimes needed in modular DRL libraries.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    Gym

    Toolkit for developing and comparing reinforcement learning algorithms

    ...Open source interface to reinforcement learning tasks. The gym library provides an easy-to-use suite of reinforcement learning tasks: Gym provides the environment, you provide the algorithm. You can write your agent using your existing numerical computation library, such as TensorFlow or Theano; Gym makes no assumptions about the structure of your agent and is compatible with any such library. The gym library is a collection of test problems — environments — that you can use to work out your reinforcement learning algorithms; a minimal reset/step loop follows this entry. ...
    Downloads: 2 This Week
    Last Update:
    See Project
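    A minimal reset/step loop for the entry above; CartPole-v1 and the random policy are illustrative choices, and the sketch assumes the classic four-value step API (newer gym releases split done into terminated and truncated).

    import gym

    env = gym.make("CartPole-v1")
    observation = env.reset()

    total_reward = 0.0
    done = False
    while not done:
        # You provide the algorithm; a random policy stands in for an agent here.
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        total_reward += reward

    print("episode return:", total_reward)
    env.close()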
  • 12
    Machine Learning PyTorch Scikit-Learn

    Code Repository for Machine Learning with PyTorch and Scikit-Learn

    ...For those who are interested in knowing what this book covers in general, I’d describe it as a comprehensive resource on the fundamental concepts of machine learning and deep learning. The first half of the book introduces readers to machine learning using scikit-learn, the de facto approach for working with tabular datasets. Then, the second half of this book focuses on deep learning, including applications to natural language processing and computer vision.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 13
    Trax

    Deep learning with clear code and speed

    Trax is an end-to-end library for deep learning that focuses on clear code and speed. It is actively used and maintained by the Google Brain team. You can run a pre-trained Transformer and create a translator in a few lines of code. The documentation covers features and resources, API docs, where to talk to us, and how to open an issue, as well as a walkthrough of how Trax works, how to make new models, and how to train on your own data. Trax includes basic models (like ResNet, LSTM, Transformer) and RL algorithms (like REINFORCE, A2C, PPO). It...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 14
    ReinventCommunity

    Jupyter Notebook tutorials for REINVENT 3.2

    This repository is a collection of useful Jupyter notebooks, code snippets, and example JSON files illustrating the use of REINVENT 3.2.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    Hands-on Unsupervised Learning

    Code for Hands-on Unsupervised Learning Using Python (O'Reilly Media)

    ...Unsupervised learning can be applied to unlabeled datasets to discover meaningful patterns buried deep in the data, patterns that may be nearly impossible for humans to uncover. Author Ankur Patel provides practical knowledge on how to apply unsupervised learning using two simple, production-ready Python frameworks: scikit-learn and TensorFlow. With the hands-on examples and code provided, you will identify difficult-to-find patterns in data. A minimal scikit-learn clustering sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
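    A minimal scikit-learn sketch of the kind of unsupervised workflow described above, not code from the book itself; the synthetic blobs and the choice of k-means clustering are illustrative.

    from sklearn.datasets import make_blobs
    from sklearn.cluster import KMeans

    # Unlabeled data: only X is used for fitting, never the labels.
    X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

    # Discover structure without labels by clustering.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print(kmeans.cluster_centers_)   # discovered cluster centers
    print(kmeans.labels_[:10])       # cluster assignments for the first few points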
  • 16
    ChainerRL

    ChainerRL is a deep reinforcement learning library

    ChainerRL (this repository) is a deep reinforcement learning library that implements various state-of-the-art deep reinforcement learning algorithms in Python using Chainer, a flexible deep learning framework. PFRL is the PyTorch analog of ChainerRL. ChainerRL has a set of accompanying visualization tools to aid developers in understanding and debugging their RL agents; with this visualization tool, the behavior of ChainerRL agents can be easily inspected from a browser UI. Environments that support the subset of OpenAI Gym's interface (reset and step methods) can be used; a minimal sketch of that interface follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
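    A minimal sketch of the reset/step interface mentioned above, i.e. an environment exposing only the subset of the OpenAI Gym API that ChainerRL needs; the one-dimensional countdown dynamics are purely illustrative, and wiring the environment to a ChainerRL agent is omitted.

    import numpy as np

    class CountdownEnv:
        """Toy environment exposing only the reset/step subset of the Gym interface."""

        def reset(self):
            self.state = np.array([10.0], dtype=np.float32)
            return self.state

        def step(self, action):
            # action 1 decrements the counter, action 0 leaves it unchanged.
            self.state = self.state - float(action)
            done = bool(self.state[0] <= 0)
            reward = 1.0 if done else 0.0
            return self.state, reward, done, {}

    env = CountdownEnv()
    obs = env.reset()
    done = False
    while not done:
        obs, reward, done, info = env.step(1)
    print("finished with reward", reward)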
  • 17
    Coach

    Enables easy experimentation with state of the art algorithms

    ...Coach supports many state-of-the-art reinforcement learning algorithms, which are separated into three main classes - value optimization, policy optimization, and imitation learning. Coach supports a large number of environments which can be solved using reinforcement learning.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    Rainbow

    Rainbow: Combining Improvements in Deep Reinforcement Learning

    Combining improvements in deep reinforcement learning. Results and pretrained models can be found in the releases. Data-efficient Rainbow can be run using several options (note that the "unbounded" memory is implemented here in practice by manually setting the memory capacity to be the same as the maximum number of timesteps).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    Easy-TensorFlow

    Simple and comprehensive tutorials in TensorFlow

    ...In addition to the aforementioned points, TensorFlow's large community means answers to almost any question you may encounter are readily available. Furthermore, since most developers use TensorFlow for code development, hands-on experience with TensorFlow is a necessity these days. TensorBoard is a powerful visualization suite developed to track both the network topology and performance, making debugging even simpler; a minimal logging sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
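    A minimal TensorBoard logging sketch related to the visualization point above, assuming TensorFlow 2.x; the log directory and the synthetic loss values are illustrative.

    import tensorflow as tf

    # Write scalar summaries that TensorBoard can plot (the log directory is an arbitrary choice).
    writer = tf.summary.create_file_writer("logs/easy_tf_demo")

    with writer.as_default():
        for step in range(100):
            loss = 1.0 / (step + 1)  # stand-in for a real training loss
            tf.summary.scalar("loss", loss, step=step)

    # Inspect the curves with: tensorboard --logdir logs/easy_tf_demo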
  • 20
    Intel neon

    Intel® Nervana™ reference deep learning framework

    neon is Intel's reference deep learning framework, committed to best performance on all hardware and designed for ease of use and extensibility. See the new features in our latest release. We want to highlight that neon v2.0.0+ has been optimized for much better performance on CPUs by enabling the Intel Math Kernel Library (MKL). The DNN (Deep Neural Networks) component of MKL that is used by neon is provided free of charge and downloaded automatically as part of the neon installation. The gpu...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 21
    WikiSQL

    A large annotated semantic parsing corpus for developing NL interfaces

    A large crowd-sourced dataset for developing natural language interfaces for relational databases. WikiSQL is the dataset released along with our work Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning. Regarding tokenization and Stanza: when WikiSQL was written 3 years ago, it relied on Stanza, a CoreNLP Python wrapper that has since been deprecated. If you'd still like to use the tokenizer, please use the docker image. We do not anticipate switching to the current Stanza, as changes to the tokenizer would render the previous results not reproducible.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    General purpose agents using reinforcement learning. Combines radial basis functions, temporal difference learning, planning, uncertainty estimations, and curiosity. Intended to be an out-of-the-box solution for roboticists and game developers.
    Downloads: 1 This Week
    Last Update:
    See Project