Showing 841 open source projects for "python q learning"

  • 1
    TradeMaster

    TradeMaster is an open-source platform for quantitative trading

    TradeMaster is a first-of-its-kind, best-in-class open-source platform for quantitative trading (QT) empowered by reinforcement learning (RL), which covers the full pipeline for the design, implementation, evaluation and deployment of RL-based algorithms. TradeMaster is composed of 6 key modules: 1) multi-modality market data of different financial assets at multiple granularities; 2) whole data preprocessing pipeline; 3) a series of high-fidelity data-driven market simulators for mainstream...
    Downloads: 6 This Week
    Last Update:
    See Project
  • 2
    LM Human Preferences

    Code for the paper Fine-Tuning Language Models from Human Preferences

    lm-human-preferences is the official OpenAI codebase that implements the method from the paper Fine-Tuning Language Models from Human Preferences. Its purpose is to show how to align language models with human judgments by training a reward model from human comparisons and then fine-tuning a policy model using that reward signal. The repository includes scripts to train the reward model (learning to rank or score pairs of outputs), and to fine-tune a policy (a language model) with... A minimal sketch of this pairwise reward-modeling objective appears after this listing.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    texturize

    Generate photo-realistic textures based on source images

    Remix, remake, mashup! texturize is a command-line tool and Python library that automatically generates new textures similar to a source image or photograph. It's useful in computer graphics when you want to make variations on a theme or expand the size of an existing texture. The software is powered by deep learning, using a combination of convolutional networks and example-based optimization to synthesize images. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    PARL

    A high-performance distributed training framework

    PARL is a scalable reinforcement learning framework built on top of PaddlePaddle. It focuses on modularity and ease of use, supporting distributed training and a variety of RL algorithms.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    VALL-E

    PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech)

    We introduce a language modeling approach for text to speech synthesis (TTS). Specifically, we train a neural codec language model (called VALL-E) using discrete codes derived from an off-the-shelf neural audio codec model, and regard TTS as a conditional language modeling task rather than continuous signal regression as in previous work. During the pre-training stage, we scale up the TTS training data to 60K hours of English speech which is hundreds of times larger than existing systems....
    Downloads: 5 This Week
    Last Update:
    See Project
  • 6
    NanoDet-Plus

    Lightweight anchor-free object detection model

    Super-fast, high-accuracy, lightweight anchor-free object detection model that runs in real time on mobile devices. NanoDet is an FCOS-style, one-stage, anchor-free object detection model that uses Generalized Focal Loss as its classification and regression loss. In NanoDet-Plus, we propose a novel label assignment strategy with a simple assign guidance module (AGM) and a dynamic soft label assigner (DSLA) to solve the optimal label assignment problem in lightweight model training. We also introduce a...
    Downloads: 11 This Week
    Last Update:
    See Project
  • 7
    Keras Attention Mechanism

    Attention mechanism Implementation for Keras

    Many-to-one attention mechanism for Keras. The project demonstrates that using attention yields higher accuracy on the IMDB dataset by comparing two LSTM networks: one with this attention layer and one with a fully connected layer, both with the same number of parameters (250K) for a fair comparison. The attention is expected to be highest after the delimiters; the project README shows an overview of training, with the attention map on top and the ground truth below. An illustrative many-to-one attention sketch appears after this listing.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    PIFuHD

    High-Resolution 3D Human Digitization from A Single Image

    PIFuHD (Pixel-Aligned Implicit Function for 3D human reconstruction at high resolution) is a method and codebase to reconstruct high-fidelity 3D human meshes from a single image. It extends prior PIFu work by increasing resolution and detail, enabling fine geometry in cloth folds, hair, and subtle surface features. The method operates by learning an implicit occupancy / surface function conditioned on the image and camera projection; at inference time it queries dense points to reconstruct a... An illustrative occupancy-grid query sketch appears after this listing.
    Downloads: 8 This Week
    Last Update:
    See Project
  • 9
    ElegantRL

    Massively Parallel Deep Reinforcement Learning

    ElegantRL is an efficient and flexible deep reinforcement learning framework designed for researchers and practitioners. It focuses on simplicity, high performance, and supporting advanced RL algorithms.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    FFCV

    Fast Forward Computer Vision (and other ML workloads!)

    ffcv is a drop-in data loading system that dramatically increases data throughput in model training. From gridding to benchmarking to fast research iteration, there are many reasons to want faster model training. The project provides premade codebases for training on ImageNet and CIFAR, including both (a) extensible codebases and (b) numerous premade training configurations. A minimal loader sketch appears after this listing.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    This repository records EleutherAI's library for training large-scale language models on GPUs. Our current framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. We aim to make this repo a centralized and accessible place to gather techniques for training large-scale autoregressive language models, and accelerate research into large-scale training. For those looking for a TPU-centric codebase, we...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 12
    auto-sklearn

    Automated machine learning with scikit-learn

    auto-sklearn is an automated machine learning toolkit and a drop-in replacement for a scikit-learn estimator. It frees the machine learning user from algorithm selection and hyperparameter tuning by leveraging recent advances in Bayesian optimization, meta-learning, and ensemble construction. Auto-sklearn 2.0 includes the latest research on automatically configuring the AutoML system itself and contains a multitude of improvements that speed up fitting the AutoML system... A minimal usage sketch appears after this listing.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
    Merlion

    A Machine Learning Framework for Time Series Intelligence

    Merlion is a Python library for time series intelligence. It provides an end-to-end machine learning framework that includes loading and transforming data, building and training models, post-processing model outputs, and evaluating model performance. It supports various time series learning tasks, including forecasting, anomaly detection, and change point detection for both univariate and multivariate time series. A hedged anomaly-detection sketch appears after this listing.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    BERTScore

    BERT score for text generation

    Automatic evaluation metric described in the paper BERTScore: Evaluating Text Generation with BERT (ICLR 2020). About 130 models are supported (see the project's spreadsheet for their correlations with human evaluation). Currently, the best model is microsoft/deberta-xlarge-mnli; please consider using it instead of the default roberta-large in order to have the best correlation with human evaluation. A minimal scoring sketch appears after this listing.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    ManimML

    ManimML is a project focused on providing animations

    ManimML is a project focused on providing animations and visualizations of common machine-learning concepts with the Manim Community Library. Please check out our paper. We want this project to be a compilation of primitive visualizations that can be easily combined to create videos about complex machine-learning concepts. Additionally, we want to provide a set of abstractions that allow users to focus on explanations instead of software engineering.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    DeepFaceLive

    Real-time face swap for PC streaming or video calls

    You can swap the face from a webcam feed or from a video file using trained face models. There is also a Face Animator module in the DeepFaceLive app: you can control a static face picture using video or your own face from the camera. The quality is not the best; it requires careful face matching and parameter tuning for every face pair, but it is enough for funny videos and memes, or for real-time streaming at 25 fps on a 35 TFLOPS GPU.
    Downloads: 301 This Week
    Last Update:
    See Project
  • 17
    Sockeye

    Sequence-to-sequence framework, focused on Neural Machine Translation

    Sockeye is an open-source sequence-to-sequence framework for Neural Machine Translation built on PyTorch. It implements distributed training and optimized inference for state-of-the-art models, powering Amazon Translate and other MT applications. For a quickstart guide to training a standard NMT model on any size of data, see the WMT 2014 English-German tutorial. If you are interested in collaborating or have any questions, please submit a pull request or issue. You can also send questions...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    Alpa

    Training and serving large-scale neural networks

    Alpa is a system for training and serving large-scale neural networks. Scaling neural networks to hundreds of billions of parameters has enabled dramatic breakthroughs such as GPT-3, but training and serving these large-scale neural networks require complicated distributed system techniques. Alpa aims to automate large-scale distributed training and serving with just a few lines of code.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    CodeContests

    Large dataset of coding contests designed for AI and ML model training

    CodeContests, developed by Google DeepMind, is a large-scale competitive programming dataset designed for training and evaluating machine learning models on code generation and problem solving. This dataset played a central role in the development of AlphaCode, DeepMind’s model for solving programming problems at a human-competitive level, as published in Science. CodeContests aggregates problems and human-written solutions from multiple programming competition platforms, including AtCoder,...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 20
    Compose

    A machine learning tool for automated prediction engineering

    Compose is a machine learning tool for automated prediction engineering. It allows you to structure prediction problems and generate labels for supervised learning. An end user defines an outcome of interest by writing a labeling function, then runs a search to automatically extract training examples from historical data. The result is then provided to Featuretools for automated feature engineering and subsequently to EvalML for automated machine learning. Prediction problems are structured... A minimal labeling sketch appears after this listing.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    ConvNeXt V2

    Code release for ConvNeXt V2 model

    ConvNeXt V2 is an evolution of the ConvNeXt architecture that co-designs convolutional networks alongside self-supervised learning. The V2 version introduces a fully convolutional masked autoencoder (FCMAE) framework where parts of the image are masked and the network reconstructs the missing content, marrying convolutional inductive bias with powerful pretraining. A key innovation is a new Global Response Normalization (GRN) layer added to the ConvNeXt backbone, which enhances feature... A sketch of the GRN layer appears after this listing.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    Petastorm

    Petastorm library enables single machine or distributed training

    ...Petastorm supports popular Python-based machine learning (ML) frameworks such as TensorFlow, PyTorch, and PySpark. It can also be used from pure Python code. A dataset created using Petastorm is stored in Apache Parquet format. On top of the Parquet schema, Petastorm also stores higher-level schema information that makes multidimensional arrays a native part of a Petastorm dataset. A minimal reading sketch appears after this listing.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    UnionML

    Build and deploy machine learning microservices

    Creating ML apps should be simple and frictionless. UnionML is an open-source Python framework built on top of Flyte™, unifying the complex ecosystem of ML tools into a single interface. Combine the tools that you love using a simple, standardized API so you can stop writing so much boilerplate and focus on what matters: the data and the models that learn from them. Fit the rich ecosystem of tools and frameworks into a common protocol for machine learning.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    Karate Club

    An API Oriented Open-source Python Framework for Unsupervised Learning

    Karate Club is an unsupervised machine learning extension library for NetworkX. It consists of state-of-the-art methods for unsupervised learning on graph-structured data. To put it simply, it is a Swiss Army knife for small-scale graph mining research. First, it provides network embedding techniques at the node and graph level. Second, it includes a variety of overlapping and non-overlapping community detection methods. Implemented methods cover a wide range of network science... A minimal embedding sketch appears after this listing.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 25
    d2l-zh

    Chinese-language edition of Dive into Deep Learning

    d2l-zh is the Chinese-language edition of Dive into Deep Learning, an interactive, open-source deep learning textbook that combines code, math, and explanatory text. It features runnable Jupyter notebooks compatible with multiple frameworks (e.g., PyTorch, MXNet, TensorFlow), comprehensive theoretical analysis, and exercises. It has been widely adopted for teaching deep learning, used by more than 500 universities across over 70 countries.
    Downloads: 0 This Week
    Last Update:
    See Project
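
For LM Human Preferences: a minimal sketch of the pairwise comparison objective used to fit a reward model to human preferences. This is generic PyTorch, not the repository's own code; `ToyRewardModel` and all shapes and names here are hypothetical stand-ins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRewardModel(nn.Module):
    """Hypothetical stand-in for a learned reward model: embeds token ids
    and pools them into a single scalar score per sequence."""
    def __init__(self, vocab_size=1000, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, token_ids):                 # (batch, seq_len)
        pooled = self.embed(token_ids).mean(dim=1)
        return self.head(pooled).squeeze(-1)      # (batch,)

def pairwise_reward_loss(model, preferred_ids, rejected_ids):
    # Bradley-Terry style objective: the human-preferred continuation
    # should receive a higher scalar reward than the rejected one.
    return -F.logsigmoid(model(preferred_ids) - model(rejected_ids)).mean()

model = ToyRewardModel()
preferred = torch.randint(0, 1000, (4, 16))
rejected = torch.randint(0, 1000, (4, 16))
loss = pairwise_reward_loss(model, preferred, rejected)
loss.backward()
```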
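
For Keras Attention Mechanism: an illustrative many-to-one attention layer in Keras that pools an LSTM's hidden states into a single context vector. This is a from-scratch sketch of the idea, not the package's own `Attention` class; the layer name and sizes are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

class ManyToOneAttention(layers.Layer):
    """Additive attention that pools a sequence of hidden states
    (batch, time, features) into one context vector."""
    def __init__(self, units=64, **kwargs):
        super().__init__(**kwargs)
        self.proj = layers.Dense(units, activation="tanh")
        self.score = layers.Dense(1)

    def call(self, hidden_states):
        scores = self.score(self.proj(hidden_states))           # (batch, time, 1)
        weights = tf.nn.softmax(scores, axis=1)                 # attention over time
        return tf.reduce_sum(weights * hidden_states, axis=1)   # (batch, features)

inputs = layers.Input(shape=(None, 300))                  # e.g. word embeddings
hidden = layers.LSTM(128, return_sequences=True)(inputs)
context = ManyToOneAttention(64)(hidden)
outputs = layers.Dense(1, activation="sigmoid")(context)  # binary sentiment head
model = tf.keras.Model(inputs, outputs)
model.summary()
```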
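
For PIFuHD: an illustrative sketch of the inference idea described above (query a learned implicit occupancy function at dense 3D points, then threshold to recover the occupied volume). `occupancy_fn` is a hypothetical stand-in, not the repository's API; in PIFuHD the prediction is conditioned on pixel-aligned image features.

```python
import numpy as np

def query_occupancy_grid(occupancy_fn, resolution=64, bound=1.0):
    """Evaluate an implicit occupancy function on a dense 3D grid."""
    axis = np.linspace(-bound, bound, resolution)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    occ = occupancy_fn(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)
    # Thresholding gives an inside/outside decision; a mesh is then typically
    # extracted from this volume with marching cubes.
    return occ > 0.5

# Toy stand-in for a trained network: a sphere of radius 0.5 at the origin.
inside = query_occupancy_grid(lambda p: (np.linalg.norm(p, axis=1) < 0.5).astype(float))
print(inside.sum(), "occupied voxels")
```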
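
For FFCV: a minimal loading sketch, assuming a dataset already serialized to FFCV's `.beton` format with an image field named "image" and a label field named "label"; the file name, field names, and transform choices are assumptions.

```python
from ffcv.loader import Loader, OrderOption
from ffcv.fields.decoders import SimpleRGBImageDecoder, IntDecoder
from ffcv.transforms import ToTensor

# One decoding/augmentation pipeline per field written into train.beton.
loader = Loader(
    "train.beton",
    batch_size=256,
    num_workers=8,
    order=OrderOption.RANDOM,   # shuffled epoch order
    pipelines={
        "image": [SimpleRGBImageDecoder(), ToTensor()],
        "label": [IntDecoder(), ToTensor()],
    },
)

for images, labels in loader:
    pass  # feed each batch to your training step
```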
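
For auto-sklearn: a minimal sketch of the drop-in scikit-learn-style interface; the dataset and time budgets are arbitrary.

```python
import autosklearn.classification
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=120,   # total search budget (seconds)
    per_run_time_limit=30,         # budget per candidate pipeline
)
automl.fit(X_train, y_train)       # searches algorithms and hyperparameters
print(accuracy_score(y_test, automl.predict(X_test)))
```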
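
For Merlion: a hedged anomaly-detection sketch using the library's default detector. The CSV path is a placeholder, and the module paths follow the project's README-style defaults, so treat them as assumptions if your version differs.

```python
import pandas as pd
from merlion.utils import TimeSeries
from merlion.models.defaults import DefaultDetector, DefaultDetectorConfig

df = pd.read_csv("metrics.csv", index_col=0, parse_dates=True)  # placeholder data
train = TimeSeries.from_pd(df[:800])
test = TimeSeries.from_pd(df[800:])

model = DefaultDetector(DefaultDetectorConfig())
model.train(train_data=train)
anomaly_scores = model.get_anomaly_label(time_series=test)
print(anomaly_scores.to_pd().head())
```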
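
For BERTScore: a minimal scoring sketch using the package's functional API; the example sentences are arbitrary.

```python
from bert_score import score

candidates = ["The cat sat on the mat."]
references = ["A cat was sitting on the mat."]

# lang="en" selects a default English model; pass model_type explicitly
# to use a different backbone such as a DeBERTa variant.
P, R, F1 = score(candidates, references, lang="en", verbose=True)
print(F1.mean().item())
```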
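
For Compose: a minimal labeling sketch with `composeml`. The toy data, the `total_spent` labeling function, and the keyword `target_dataframe_name` (older releases call it `target_entity`) are assumptions about one version of the API.

```python
import composeml as cp
import pandas as pd

transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2],
    "transaction_time": pd.to_datetime(
        ["2021-01-01", "2021-01-02", "2021-01-01", "2021-01-03"]),
    "amount": [10.0, 25.0, 5.0, 40.0],
}).sort_values("transaction_time")

def total_spent(df):
    # Outcome of interest: amount a customer spends within the window.
    return df["amount"].sum()

label_maker = cp.LabelMaker(
    target_dataframe_name="customer_id",   # `target_entity` in older releases
    time_index="transaction_time",
    labeling_function=total_spent,
    window_size="2d",
)
labels = label_maker.search(transactions, num_examples_per_instance=1)
print(labels.head())
```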
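
For ConvNeXt V2: a sketch of the Global Response Normalization (GRN) layer described above, written for channels-last tensors (N, H, W, C) as in the paper; the tensor sizes in the usage lines are arbitrary.

```python
import torch
import torch.nn as nn

class GRN(nn.Module):
    """Global Response Normalization over a channels-last feature map."""
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.eps = eps

    def forward(self, x):                                      # x: (N, H, W, C)
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)      # spatial L2 norm per channel
        nx = gx / (gx.mean(dim=-1, keepdim=True) + self.eps)   # divisive normalization across channels
        return self.gamma * (x * nx) + self.beta + x           # calibrate, with residual connection

x = torch.randn(2, 14, 14, 96)
print(GRN(96)(x).shape)   # torch.Size([2, 14, 14, 96])
```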
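
For Petastorm: a minimal reading sketch that iterates over a dataset previously materialized with a Petastorm schema; the dataset URL is a placeholder.

```python
from petastorm import make_reader

with make_reader("file:///tmp/my_petastorm_dataset") as reader:
    for sample in reader:
        # Each sample exposes the fields of the dataset's Unischema,
        # including multidimensional array fields, as attributes.
        print(sample)
        break
```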
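
For Karate Club: a minimal node-embedding sketch using the library's scikit-learn-style API; the synthetic graph and embedding size are arbitrary.

```python
import networkx as nx
from karateclub import DeepWalk

graph = nx.newman_watts_strogatz_graph(100, 10, 0.05)  # nodes indexed 0..99

model = DeepWalk(dimensions=32)
model.fit(graph)                     # Karate Club expects node labels 0..n-1
embedding = model.get_embedding()    # shape: (num_nodes, dimensions)
print(embedding.shape)
```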