Showing 890 open source projects for "training"

  • 1
    MMGeneration

    MMGeneration is a powerful toolkit for generative models

    ...MMGeneration is a powerful toolkit for generative models, especially GANs at present. It is based on PyTorch and MMCV, and the master branch works with PyTorch 1.5+. Training is currently supported for Unconditional GANs, Internal GANs, and Image Translation Models, with support for conditional models coming soon. A rich toolkit of GAN applications is provided: GAN interpolation, GAN projection, and GAN manipulation are integrated into the framework. It's time to play with your GANs! For the highly dynamic training in generative models, a new way of training dynamic models with MMDDP is adopted. ...
    Downloads: 0 This Week
  • 2
    Simple LLM Finetuner

    Simple UI for LLM Model Finetuning

    ...In addition to training, the platform provides inference capabilities so users can immediately test and evaluate their fine-tuned models within the same environment.
    Downloads: 0 This Week
  • 3
    ClassyVision

    An end-to-end PyTorch framework for image and video classification

    ...It offers high performance and scalability—capable of training models like ResNet-50 on ImageNet in just minutes—while remaining accessible to both researchers and production engineers. The library integrates seamlessly with PyTorch Hub for easy access to pretrained models and supports elastic training using PyTorch Elastic, making distributed training robust to changes in cluster resources or hardware failures.
    Downloads: 0 This Week
  • 4
    Hyperformer

    Hypergraph Transformer for Skeleton-based Action Recognition

    ...By defining a graph with joints as vertices and their natural connections as edges, previous works successfully adopted Graph Convolutional Networks (GCNs) to model joint co-occurrences and achieved superior performance. More recently, a limitation of GCNs was identified: the topology is fixed after training. To relax this restriction, the Self-Attention (SA) mechanism has been adopted to make the topology of GCNs adaptive to the input, resulting in state-of-the-art hybrid models. Concurrently, attempts with plain Transformers have also been made, but they still lag behind state-of-the-art GCN-based methods due to the lack of structural priors.
    Downloads: 0 This Week
  • 5
    Alpa

    Training and serving large-scale neural networks

    Alpa is a system for training and serving large-scale neural networks. Scaling neural networks to hundreds of billions of parameters has enabled dramatic breakthroughs such as GPT-3, but training and serving these large-scale neural networks require complicated distributed system techniques. Alpa aims to automate large-scale distributed training and serving with just a few lines of code; a minimal usage sketch follows this entry.
    Downloads: 18 This Week
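    The sketch below illustrates Alpa's decorator-based API, assuming a JAX setup; the toy parameters, loss, and data are illustrative placeholders, not code from the Alpa repository.

      import alpa
      import jax
      import jax.numpy as jnp

      # Decorating a standard JAX train step lets Alpa decide how to shard
      # parameters and computation across devices (illustrative sketch).
      @alpa.parallelize
      def train_step(params, batch):
          def loss_fn(p):
              pred = batch["x"] @ p["w"] + p["b"]
              return jnp.mean((pred - batch["y"]) ** 2)
          grads = jax.grad(loss_fn)(params)
          # Plain SGD update on the (possibly sharded) parameter tree.
          return jax.tree_util.tree_map(lambda p, g: p - 0.01 * g, params, grads)

      params = {"w": jnp.ones((8, 1)), "b": jnp.zeros(1)}       # toy model
      batch = {"x": jnp.ones((32, 8)), "y": jnp.zeros((32, 1))}  # toy data
      params = train_step(params, batch)  # Alpa compiles a distributed version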
  • 6
    VALL-E

    PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech)

    ...Specifically, we train a neural codec language model (called VALL-E) using discrete codes derived from an off-the-shelf neural audio codec model, and regard TTS as a conditional language modeling task rather than continuous signal regression as in previous work. During the pre-training stage, we scale up the TTS training data to 60K hours of English speech, hundreds of times larger than existing systems. VALL-E exhibits in-context learning capabilities and can synthesize high-quality personalized speech with only a 3-second enrolled recording of an unseen speaker as an acoustic prompt. Experimental results show that VALL-E significantly outperforms the state-of-the-art zero-shot TTS system in terms of speech naturalness and speaker similarity. ...
    Downloads: 3 This Week
  • 7
    sense2vec

    Contextually-keyed word vectors

    sense2vec (Trask et al., 2015) is a nice twist on word2vec that lets you learn more interesting and detailed word vectors. This library is a simple Python implementation for loading, querying and training sense2vec models (see the usage sketch below). For more details, check out our blog post. To explore the semantic similarities across all Reddit comments of 2015 and 2019, see the interactive demo.
    Downloads: 5 This Week
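    A minimal sketch of the standalone loading and querying API, assuming one of the pretrained vector packages (e.g. the Reddit vectors) has been downloaded and extracted; the path is a placeholder.

      from sense2vec import Sense2Vec

      # Load a pretrained sense2vec package from disk.
      s2v = Sense2Vec().from_disk("/path/to/s2v_reddit_2015_md")

      query = "natural_language_processing|NOUN"  # keys are "phrase|SENSE"
      assert query in s2v
      vector = s2v[query]            # the sense-specific vector
      freq = s2v.get_freq(query)     # corpus frequency of this sense
      print(s2v.most_similar(query, n=3))  # nearest keys by similarity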
  • 8
    PARL

    A high-performance distributed training framework

    PARL is a scalable reinforcement learning framework built on top of PaddlePaddle. It focuses on modularity and ease of use, supporting distributed training and a variety of RL algorithms.
    Downloads: 0 This Week
  • 9
    ToMe (Token Merging)

    A method to increase the speed and lower the memory footprint of transformer models

    ...ToMe integrates seamlessly into existing transformer models such as DeiT, MAE, SWAG, and timm ViTs, offering 2–3x speedups during inference and substantial efficiency gains during training. The method can be applied dynamically at inference time or incorporated into training for improved performance; a usage sketch follows this entry.
    Downloads: 0 This Week
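    Based on the usage described in the ToMe README, applying it to a timm ViT is a one-line patch plus a merging-rate knob; a minimal sketch (the model name is just an example):

      import timm
      import tome

      # Load a standard ViT and patch it so its blocks merge similar tokens.
      model = timm.create_model("vit_base_patch16_224", pretrained=True)
      tome.patch.timm(model)
      model.r = 16  # merge ~16 tokens per layer; higher r = faster, less exact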
  • 10
    LM Human Preferences

    Code for the paper Fine-Tuning Language Models from Human Preferences

    lm-human-preferences is the official OpenAI codebase implementing the method from the paper Fine-Tuning Language Models from Human Preferences. Its purpose is to show how to align language models with human judgments by training a reward model from human comparisons and then fine-tuning a policy model using that reward signal. The repository includes scripts to train the reward model (learning to rank or score pairs of outputs) and to fine-tune a policy (a language model) with reinforcement learning or related techniques guided by that reward model; the core pairwise objective is sketched after this entry. ...
    Downloads: 0 This Week
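    The pairwise reward-model objective described above is commonly implemented as a Bradley-Terry-style log-sigmoid loss on the score margin. The sketch below is conceptual, not code from this repository:

      import torch
      import torch.nn.functional as F

      def reward_pair_loss(r_preferred, r_rejected):
          # The reward model should score the human-preferred output higher;
          # maximizing log-sigmoid of the margin is the Bradley-Terry objective.
          return -F.logsigmoid(r_preferred - r_rejected).mean()

      r_pref = torch.tensor([1.3, 0.2])   # toy scores for preferred outputs
      r_rej = torch.tensor([0.4, -0.1])   # toy scores for rejected outputs
      print(reward_pair_loss(r_pref, r_rej))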
  • 11
    Paddle Quantum

    Paddle Quantum (量桨) is the world's first cloud-integrated quantum machine learning platform based on Baidu PaddlePaddle. It supports the building and training of quantum neural networks, making PaddlePaddle the first deep-learning framework in China to support quantum machine learning. Paddle Quantum is feature-rich and easy to use, providing comprehensive API documentation and tutorials to help users get started right away. It aims to establish a bridge between artificial intelligence (AI) and quantum computing (QC). ...
    Downloads: 0 This Week
  • 12
    picoGPT

    An unnecessarily tiny implementation of GPT-2 in NumPy

    ...It allows users to understand how tokenization, transformer layers, attention mechanisms, and autoregressive text generation operate in modern large language models. The project uses a small amount of code to illustrate the essential mathematical operations involved in training and running a transformer-based neural network; the attention core is sketched after this entry. Because the code is intentionally lightweight, it is often used as a teaching resource for students learning about natural language processing and deep learning architectures. Developers can explore the repository to understand how language models generate text and how transformer components interact within the architecture.
    Downloads: 0 This Week
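    For illustration, the core attention math that picoGPT implements in NumPy looks roughly like the sketch below (function names and shapes are generic, not picoGPT's exact code):

      import numpy as np

      def softmax(x):
          e = np.exp(x - x.max(axis=-1, keepdims=True))
          return e / e.sum(axis=-1, keepdims=True)

      def causal_self_attention(q, k, v):
          # Scaled dot-product attention with a causal mask that hides
          # future positions, so generation stays autoregressive.
          scores = q @ k.T / np.sqrt(q.shape[-1])
          mask = np.triu(np.ones_like(scores), 1) * -1e10
          return softmax(scores + mask) @ v

      x = np.random.randn(5, 8)             # 5 tokens, 8-dim single head
      out = causal_self_attention(x, x, x)  # toy self-attention over x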
  • 13
    minimalRL-pytorch

    Implementations of basic RL algorithms with minimal lines of code

    minimalRL is a lightweight reinforcement learning repository that implements several classic algorithms using minimal PyTorch code. The project is designed primarily as an educational resource that demonstrates how reinforcement learning algorithms work internally without the complexity of large frameworks; a toy update in the same spirit is sketched after this entry. Each algorithm implementation is contained within a single file and typically ranges from about 100 to 150 lines of code, making it easy for learners to inspect the entire implementation...
    Downloads: 0 This Week
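    In the same single-file spirit, a toy REINFORCE-style update looks like the sketch below (illustrative random data, not one of the repo's scripts, which run on real Gym environments):

      import torch
      import torch.nn as nn

      # Tiny softmax policy for a CartPole-like task (4 obs dims, 2 actions).
      policy = nn.Sequential(nn.Linear(4, 128), nn.ReLU(),
                             nn.Linear(128, 2), nn.Softmax(dim=-1))
      optimizer = torch.optim.Adam(policy.parameters(), lr=5e-4)

      states = torch.randn(10, 4)            # toy episode of 10 states
      actions = torch.randint(0, 2, (10,))   # actions that were taken
      returns = torch.randn(10)              # discounted returns G_t

      # REINFORCE: raise log-probability of actions weighted by their return.
      probs = policy(states).gather(1, actions.unsqueeze(1)).squeeze(1)
      loss = -(torch.log(probs) * returns).mean()
      optimizer.zero_grad()
      loss.backward()
      optimizer.step()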
  • 14
    OptiMate

    Libraries for optimizing AI models, inference speed, and GPU usage

    ...It groups several internal optimization tools developed by Nebuly AI into a single repository that focuses on improving inference speed, reducing infrastructure usage, and streamlining model training workflows. Its modules help developers automatically apply optimization techniques that better align AI models with the capabilities of the underlying hardware such as GPUs and CPUs. One of the core components, Speedster, focuses on accelerating model inference by applying state of the art optimization techniques to increase performance while lowering operational costs. ...
    Downloads: 5 This Week
  • 15
    FFCV

    Fast Forward Computer Vision (and other ML workloads!)

    ffcv is a drop-in data loading system that dramatically increases data throughput in model training. From gridding to benchmarking to fast research iteration, there are many reasons to want faster model training. The project ships premade codebases for training on ImageNet and CIFAR, including both (a) extensible codebases and (b) numerous premade training configurations; a minimal loader sketch follows this entry.
    Downloads: 1 This Week
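    A minimal sketch of an FFCV loading pipeline, assuming a dataset has already been converted to FFCV's .beton format; the file name and pipeline choices here are illustrative:

      from ffcv.loader import Loader, OrderOption
      from ffcv.fields.decoders import IntDecoder, SimpleRGBImageDecoder
      from ffcv.transforms import ToTensor

      # Each field gets its own decode/transform pipeline; the loader then
      # streams batches using FFCV's fast, asynchronous data processing.
      loader = Loader(
          "train.beton",
          batch_size=256,
          num_workers=8,
          order=OrderOption.RANDOM,
          pipelines={
              "image": [SimpleRGBImageDecoder(), ToTensor()],
              "label": [IntDecoder(), ToTensor()],
          },
      )
      for images, labels in loader:
          pass  # feed batches into an ordinary training loop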
  • 16
    PIFuHD

    High-Resolution 3D Human Digitization from A Single Image

    ...It also uses a two-stage architecture: a coarse global model followed by local refinement patches to capture fine detail, balancing global consistency and local detail. The repo includes training pipelines, dataset loaders (for Multi-POP, etc.), and inference scripts for mesh output including depth maps for postprocessing. To help practical use, there are utilities for normal estimation, texture back-projection, mesh cleanup, and integration with rendering pipelines.
    Downloads: 4 This Week
  • 17
    Stable-Dreamfusion

    Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion

    ...Different from Imagen, Stable-Diffusion is a latent diffusion model, which diffuses in a latent space instead of the original image space. Therefore, we need the loss to propagate back from the VAE's encoder part too, which introduces extra time costs in training. We use the multi-resolution grid encoder to implement the NeRF backbone (implementation from torch-ngp), which enables much faster rendering.
    Downloads: 0 This Week
  • 18
    ChatGenTitle

    A paper title generation model fine-tuned on the LLaMA model

    ChatGenTitle: A paper title generation model fine-tuned on the LLaMA model using information from millions of arXiv papers.
    Downloads: 0 This Week
  • 19
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    This repository records EleutherAI's library for training large-scale language models on GPUs. Our current framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. We aim to make this repo a centralized and accessible place to gather techniques for training large-scale autoregressive language models, and accelerate research into large-scale training.
    Downloads: 1 This Week
  • 20
    Keras Attention Mechanism

    Attention mechanism Implementation for Keras

    ...We consider two LSTM networks: one with this attention layer and the other with a fully connected layer. Both have the same number of parameters for a fair comparison (250K). The attention is expected to be highest after the delimiters. In the training overview (shown as a figure in the original README), the top row is the attention map and the bottom row the ground truth; as training progresses, the model learns the task and the attention map converges to the ground truth. A second task considers many 1D sequences of the same length, where the goal is to find the maximum of each sequence; the full sequence processed by the RNN layer is given to the attention layer, as in the sketch after this entry. ...
    Downloads: 0 This Week
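    A sketch of the sequence-maximum setup in that spirit, using Keras' built-in Attention layer rather than this repository's own layer; the names and sizes are illustrative:

      import numpy as np
      import tensorflow as tf
      from tensorflow.keras import layers

      seq_len = 20
      inputs = layers.Input(shape=(seq_len, 1))
      rnn = layers.LSTM(64, return_sequences=True)(inputs)      # per-step states
      query = layers.Reshape((1, 64))(layers.LSTM(64)(inputs))  # summary query
      context = layers.Attention()([query, rnn])  # attend over all timesteps
      outputs = layers.Dense(1)(layers.Flatten()(context))
      model = tf.keras.Model(inputs, outputs)
      model.compile(optimizer="adam", loss="mse")

      x = np.random.rand(1000, seq_len, 1)
      y = x.max(axis=1)              # target: the maximum of each sequence
      model.fit(x, y, epochs=1, verbose=0)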
  • 21
    smclarify

    Fairness aware machine learning. Bias detection and mitigation

    ...A facet can have value(s) that designate a sample as "sensitive". The label is a column or feature which is the target for training a machine learning model, and it can have value(s) that designate a sample as having a "positive" outcome. A bias measure is a function that returns a bias metric, and a bias metric is a numerical value indicating the level of bias detected as determined by a particular bias measure. A bias report is a collection of bias metrics for a given dataset or a combination of a dataset and model; a worked example of one simple metric follows this entry.
    Downloads: 0 This Week
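    As a concrete example of these definitions, one of the simplest pre-training bias metrics is class imbalance between facet groups. The sketch below uses plain pandas and toy data, not smclarify's own API:

      import pandas as pd

      df = pd.DataFrame({
          "gender": ["F", "M", "M", "F", "M", "M"],  # facet column
          "approved": [1, 0, 1, 0, 1, 1],            # label column
      })

      # Class imbalance CI = (n_a - n_d) / (n_a + n_d), in [-1, 1];
      # 0 means the two facet groups are equally represented.
      n_a = (df["gender"] == "M").sum()
      n_d = (df["gender"] == "F").sum()
      ci = (n_a - n_d) / (n_a + n_d)
      print(ci)  # 0.33 for this toy dataset: more "M" samples than "F"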
  • 22
    NanoDet-Plus

    Lightweight anchor-free object detection model

    ...In NanoDet-Plus, we propose a novel label assignment strategy with a simple assign guidance module (AGM) and a dynamic soft label assigner (DSLA) to solve the optimal label assignment problem in lightweight model training. We also introduce a light feature pyramid called Ghost-PAN to enhance multi-layer feature fusion. These improvements boost the previous NanoDet's detection accuracy by 7 mAP on the COCO dataset. NanoDet provides multi-backend C++ demos (ncnn, OpenVINO, and MNN) as well as an Android demo based on the ncnn library. ...
    Downloads: 6 This Week
  • 23
    Sockeye

    Sequence-to-sequence framework, focused on Neural Machine Translation

    Sockeye is an open-source sequence-to-sequence framework for Neural Machine Translation built on PyTorch. It implements distributed training and optimized inference for state-of-the-art models, powering Amazon Translate and other MT applications. For a quickstart guide to training a standard NMT model on any size of data, see the WMT 2014 English-German tutorial. If you are interested in collaborating or have any questions, please submit a pull request or issue. You can also send questions to sockeye-dev-at-amazon-dot-com. ...
    Downloads: 0 This Week
  • 24
    TradeMaster

    TradeMaster is an open-source platform for quantitative trading

    TradeMaster is a first-of-its-kind, best-in-class open-source platform for quantitative trading (QT) empowered by reinforcement learning (RL), which covers the full pipeline for the design, implementation, evaluation and deployment of RL-based algorithms. TradeMaster is composed of 6 key modules: 1) multi-modality market data of different financial assets at multiple granularities; 2) whole data preprocessing pipeline; 3) a series of high-fidelity data-driven market simulators for mainstream...
    Downloads: 1 This Week
  • 25
    ArtLine

    Deep learning tool that converts portrait photos into line art

    ArtLine is a deep learning-based project focused on generating high-quality line art portraits from input images. It leverages neural network techniques built on top of the fastai library and PyTorch to transform photographic portraits into stylized line drawings. ArtLine is trained using datasets such as APDrawing and anime sketch colorization pairs to better understand facial structures and artistic line representation. An extended version integrates ControlNet, allowing users to guide the...
    Downloads: 0 This Week