Showing 876 open source projects for "training"

  • 1
    ERNIE

    The official repository for ERNIE 4.5 and ERNIEKit

    ERNIE is an open-source large-model toolkit and model family from the PaddlePaddle ecosystem that focuses on training, fine-tuning, compression, and practical application of ERNIE large language models. The repository positions ERNIEKit as an industrial-grade development toolkit, emphasizing end-to-end workflows that span high-performance pre-training, supervised fine-tuning, and alignment. It supports both full-parameter training and parameter-efficient approaches so teams can choose between maximum quality and lower-cost adaptation depending on their constraints. ...
    Downloads: 0 This Week
  • 2
    AReaL

    Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible

    AReaL is an open-source, fully asynchronous reinforcement learning training system designed for large reasoning and agentic models: models that reason over multiple steps and agents that interact with environments. It is developed by the AReaL Team at Ant Group (inclusionAI) and builds on the ReaLHF project. Training details, datasets, and models are released for reproducibility.
    Downloads: 2 This Week
  • 3
    Opacus

    Training PyTorch models with differential privacy

    Opacus is a library that enables training PyTorch models with differential privacy. It requires minimal code changes on the client side (see the sketch below), has little impact on training performance, and lets the client track the privacy budget expended at any given moment. It provides vectorized per-sample gradient computation that is 10x faster than microbatching, supports most types of PyTorch models, and can be used with minimal modification to the original neural network. ...
    Downloads: 1 This Week
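
    A minimal sketch of the documented Opacus workflow: attach a PrivacyEngine to an ordinary model, optimizer, and data loader, train as usual, then query the privacy budget spent so far. The toy model, data, and hyperparameter values are illustrative.

      import torch
      import torch.nn as nn
      from torch.utils.data import DataLoader, TensorDataset
      from opacus import PrivacyEngine

      model = nn.Linear(10, 2)
      optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
      loader = DataLoader(
          TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))),
          batch_size=8)

      # Wrap the training objects; gradients become per-sample, clipped, and noised.
      privacy_engine = PrivacyEngine()
      model, optimizer, loader = privacy_engine.make_private(
          module=model, optimizer=optimizer, data_loader=loader,
          noise_multiplier=1.1, max_grad_norm=1.0)

      for x, y in loader:
          optimizer.zero_grad()
          nn.functional.cross_entropy(model(x), y).backward()
          optimizer.step()

      # Privacy budget (epsilon) expended so far, at a chosen delta.
      print(privacy_engine.get_epsilon(delta=1e-5))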
  • 4
    MaxText

    A simple, performant and scalable Jax LLM

    ...MaxText includes ready-to-use configurations and reproducible training examples that help developers understand how to deploy large-scale AI workloads with modern machine learning infrastructure.
    Downloads: 0 This Week
  • 5
    Feast

    Feature Store for Machine Learning

    ...This ensures that future feature values do not leak to models during training. Feast decouples ML from data infrastructure by providing a single data access layer that abstracts feature storage from feature retrieval, ensuring models remain portable as you move from training models to serving models, from batch model ...
    Downloads: 0 This Week
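
    A hedged sketch of Feast's retrieval API, following its quickstart: the entity dataframe's timestamps drive a point-in-time join, which is what prevents future feature values from leaking into training data. The entity and feature names below are the quickstart's examples, not fixed parts of the API.

      import pandas as pd
      from feast import FeatureStore

      store = FeatureStore(repo_path=".")  # a feature repo created with `feast init`
      entity_df = pd.DataFrame({
          "driver_id": [1001, 1002],
          "event_timestamp": pd.to_datetime(
              ["2021-04-12 10:59:42", "2021-04-12 08:12:10"]),
      })
      # Point-in-time correct join: only feature values known at each
      # event_timestamp are returned for that row.
      training_df = store.get_historical_features(
          entity_df=entity_df,
          features=["driver_hourly_stats:conv_rate",
                    "driver_hourly_stats:acc_rate"],
      ).to_df()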
  • 6
    MLPerf

    Reference implementations of MLPerf™ training benchmarks

    This is a repository of reference implementations for the MLPerf training benchmarks. These implementations are valid as starting points for benchmark implementations but are not fully optimized and are not intended to be used for "real" performance measurements of software frameworks or hardware. Benchmarking the performance of training ML models on a wide variety of use cases, software, and hardware drives AI performance across the tech industry.
    Downloads: 0 This Week
  • 7
    Unsloth-MLX

    Bringing the Unsloth experience to Mac users via Apple's MLX framework

    ...Users can write and test training pipelines directly on macOS before scaling up, accelerating development cycles and lowering entry barriers for model refinement.
    Downloads: 2 This Week
  • 8
    rLLM

    Democratizing Reinforcement Learning for LLMs

    ...The project is designed to support large-scale language models (including support for big models via integrated training backends), making it relevant for state-of-the-art research and production use. The framework includes tools for defining workflows, specifying objectives or reward functions, and managing training/policy updates across possibly distributed settings.
    Downloads: 0 This Week
  • 9
    Happy-LLM

    Large Language Model Principles and Practice Tutorial from Scratch

    ...The project guides learners through the entire conceptual and practical pipeline of modern LLM development, starting with foundational natural language processing concepts and gradually progressing to advanced architectures and training techniques. It explains the Transformer architecture, pre-training paradigms, and model scaling strategies while also providing hands-on coding examples so readers can implement and experiment with their own models. The tutorial emphasizes practical understanding by walking users through building and training small language models, including tokenizer construction, pre-training workflows, and fine-tuning methods.
    Downloads: 0 This Week
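
    A hypothetical minimal example (not code from the repository) of the kind of small-model pre-training step the tutorial builds toward: a tiny causal Transformer trained with next-token prediction.

      import torch
      import torch.nn as nn

      class TinyLM(nn.Module):
          def __init__(self, vocab=1000, d=64, heads=4, layers=2):
              super().__init__()
              self.emb = nn.Embedding(vocab, d)
              block = nn.TransformerEncoderLayer(d, heads, 4 * d, batch_first=True)
              self.body = nn.TransformerEncoder(block, layers)
              self.head = nn.Linear(d, vocab)

          def forward(self, x):
              # Causal mask: each position attends only to earlier tokens.
              mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
              return self.head(self.body(self.emb(x), mask=mask))

      model = TinyLM()
      opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
      tokens = torch.randint(0, 1000, (8, 33))   # toy batch of token ids
      logits = model(tokens[:, :-1])             # predict the next token
      loss = nn.functional.cross_entropy(
          logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
      loss.backward()
      opt.step()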
  • 10
    CoreNet

    CoreNet: A library for training deep neural networks

    CoreNet is Apple’s deep learning framework for distributed neural network training, designed for high scalability, low-latency communication, and strong hardware efficiency. It focuses on enabling large-scale model training across clusters of GPUs and accelerators by optimizing data flow and parallelism strategies. CoreNet provides abstractions for data, tensor, and pipeline parallelism, allowing models to scale without code duplication or heavy manual configuration. ...
    Downloads: 0 This Week
  • 11
    GeneralAI

    Large-scale Self-supervised Pre-training Across Tasks, Languages, etc.

    Fundamental research to develop new architectures for foundation models and AI, focusing on modeling generality and capability, as well as training stability and efficiency.
    Downloads: 0 This Week
  • 12
    FramePack

    Lets make video diffusion practical

    FramePack explores compact representations for sequences of image frames, targeting tasks where many near-duplicate frames carry redundant information. The idea is to “pack” frames by detecting shared structure and storing differences efficiently, which can accelerate training or inference on video-like data. By reducing I/O and memory bandwidth, datasets become lighter to load while models still see the essential temporal variation. The repository demonstrates both packing and unpacking steps, making it straightforward to integrate into preprocessing pipelines. It’s useful for diffusion and generative models that learn from sequential image datasets, as well as classical pipelines that batch many related frames. ...
    Downloads: 14 This Week
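
    A toy NumPy illustration of the base-plus-differences idea the description mentions; this is not the repository's actual encoding scheme, just the general principle of packing near-duplicate frames.

      import numpy as np

      def pack(frames):
          # Store the first frame plus signed deltas for the rest.
          base = frames[0].astype(np.int16)
          return frames[0], [f.astype(np.int16) - base for f in frames[1:]]

      def unpack(base, deltas):
          b = base.astype(np.int16)
          return [base] + [(b + d).astype(np.uint8) for d in deltas]

      frames = [np.full((4, 4), v, dtype=np.uint8) for v in (10, 11, 12)]
      base, deltas = pack(frames)   # deltas of near-duplicates are tiny/sparse
      restored = unpack(base, deltas)
      assert all((a == b).all() for a, b in zip(frames, restored))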
  • 13
    AWS Deep Learning Containers

    A set of Docker images for training and serving models in TensorFlow

    AWS Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet. Deep Learning Containers provide optimized environments with TensorFlow and MXNet, Nvidia CUDA (for GPU instances), and Intel MKL (for CPU instances) libraries, and are available in the Amazon Elastic Container Registry (Amazon ECR). The AWS DLCs are used in Amazon SageMaker as the default vehicles for your SageMaker jobs such as training, inference, and transforms. ...
    Downloads: 3 This Week
  • 14
    Megatron

    Ongoing research training transformer models at scale

    Megatron is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This repository is for ongoing research on training large transformer language models at scale. We developed efficient, model-parallel (tensor, sequence, and pipeline), and multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision. Megatron is also used in NeMo Megatron, a framework to help enterprises overcome the challenges of building and training sophisticated natural language processing models with billions and trillions of parameters. ...
    Downloads: 0 This Week
  • 15
    Magicoder

    Empowering Code Generation with OSS-Instruct

    Magicoder is an open-source family of large language models designed specifically for code generation and software development tasks. The project focuses on improving the quality and diversity of code generation by training models with a novel dataset construction approach known as OSS-Instruct. This technique uses open-source code repositories as a foundation for generating more realistic and diverse instruction datasets for training language models. By grounding training data in real open-source examples, Magicoder aims to reduce bias and improve the reliability of code generation results compared to models trained solely on synthetic instructions. ...
    Downloads: 0 This Week
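
    A hedged sketch of querying a released Magicoder checkpoint with Hugging Face transformers; the model id and the "@@ Instruction / @@ Response" prompt format follow the project's README, but verify them against the project page.

      import torch
      from transformers import pipeline

      generator = pipeline(
          "text-generation",
          model="ise-uiuc/Magicoder-S-DS-6.7B",
          torch_dtype=torch.bfloat16,
          device_map="auto",
      )
      prompt = (
          "You are an exceptionally intelligent coding assistant that "
          "consistently delivers accurate and reliable responses.\n\n"
          "@@ Instruction\nWrite a Python function that reverses a string.\n\n"
          "@@ Response\n"
      )
      print(generator(prompt, max_new_tokens=128)[0]["generated_text"])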
  • 16
    The Alignment Handbook

    Robust recipes to align language models with human and AI preferences

    The Alignment Handbook is an open-source resource created to provide practical guidance for aligning large language models with human preferences and safety requirements. The project focuses on the post-training stage of model development, where models are refined after pre-training to behave more helpfully, safely, and reliably in real-world applications. It provides detailed training recipes that explain how to perform tasks such as supervised fine-tuning, preference modeling, and reinforcement learning from human feedback. The handbook also includes reproducible workflows for training instruction-following models and evaluating alignment quality across different datasets and benchmarks. ...
    Downloads: 0 This Week
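
    A minimal sketch of the kind of supervised fine-tuning recipe the handbook covers, using the TRL library its recipes build on. The dataset and model ids below are TRL's quickstart examples rather than the handbook's own configs, and exact trainer arguments vary by TRL version.

      from datasets import load_dataset
      from trl import SFTConfig, SFTTrainer

      dataset = load_dataset("trl-lib/Capybara", split="train")
      trainer = SFTTrainer(
          model="Qwen/Qwen2.5-0.5B",                    # any causal LM checkpoint
          args=SFTConfig(output_dir="sft-demo", max_steps=10),
          train_dataset=dataset,
      )
      trainer.train()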
  • 17
    SageMaker Python SDK

    Training and deploying machine learning models on Amazon SageMaker

    SageMaker Python SDK is an open source library for training and deploying machine learning models on Amazon SageMaker. With the SDK, you can train and deploy models using popular deep learning frameworks such as Apache MXNet and TensorFlow. You can also train and deploy models with Amazon algorithms, scalable implementations of core machine learning algorithms that are optimized for SageMaker and GPU training.
    Downloads: 0 This Week
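
    A minimal sketch of the SDK's estimator pattern, assuming the PyTorch framework estimator; the role ARN, script name, S3 path, and version strings are placeholders.

      from sagemaker.pytorch import PyTorch

      estimator = PyTorch(
          entry_point="train.py",   # your training script (placeholder)
          role="arn:aws:iam::123456789012:role/SageMakerRole",
          instance_count=1,
          instance_type="ml.p3.2xlarge",
          framework_version="2.1",
          py_version="py310",
      )
      # Launches a managed training job; the "training" channel is exposed
      # inside the container as SM_CHANNEL_TRAINING.
      estimator.fit({"training": "s3://my-bucket/train-data"})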
  • 18
    higgsfield

    Fault-tolerant, highly scalable GPU orchestration

    Higgsfield is an open-source, fault-tolerant, highly scalable GPU orchestration layer and machine learning framework designed for training models with billions to trillions of parameters, such as Large Language Models (LLMs).
    Downloads: 2 This Week
  • 19
    Humanoid-Gym

    Reinforcement Learning for Humanoid Robot with Zero-Shot Sim2Real

    ...The system is built on top of NVIDIA Isaac Gym, which allows large-scale parallel simulation of robotic environments directly on GPU hardware. Its primary goal is to enable efficient training of humanoid robots in simulation while allowing learned policies to transfer effectively to real-world hardware without additional training. The framework emphasizes zero-shot sim-to-real transfer, meaning that behaviors learned in simulation can be deployed directly on physical robots with minimal adjustment. To improve reliability and generalization, the framework also includes sim-to-sim validation pipelines that test trained policies across different physics engines.
    Downloads: 2 This Week
  • 20
    SWIFT LLM

    Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs

    SWIFT LLM is a comprehensive framework developed within the ModelScope ecosystem for training, fine-tuning, evaluating, and deploying large language models and multimodal models. The platform provides a full machine learning pipeline that supports tasks ranging from model pre-training to reinforcement learning alignment techniques. It integrates with popular inference engines such as vLLM and LMDeploy to accelerate deployment and runtime performance.
    Downloads: 1 This Week
  • 21
    Simple StyleGan2 for Pytorch

    Simplest working implementation of Stylegan2

    ...You can also specify where intermediate results and model checkpoints should be stored. You can increase the network capacity (which defaults to 16) to improve generation results, at the cost of more memory. By default, if training gets cut off, it will automatically resume from the last checkpointed file. Once you have finished training, you can generate images from your latest checkpoint. If a previous checkpoint contained a better generator (which often happens, as generators can start degrading towards the end of training), you can load from it with another flag. ...
    Downloads: 0 This Week
  • 22
    Koila

    Prevent PyTorch's `CUDA error: out of memory` in just 1 line of code

    Koila is a lightweight Python library designed to help developers avoid memory errors when training deep learning models with PyTorch. The library introduces a lazy evaluation mechanism that delays computation until it is actually required, allowing the framework to better estimate the memory requirements of a model before execution. By building a computational graph first and executing operations only when necessary, koila reduces the risk of running out of GPU memory during the forward pass of neural network training. ...
    Downloads: 1 This Week
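
    A minimal sketch based on the project's documented usage, assuming its `lazy` wrapper: mark the batch dimension of the input tensors and train as usual; evaluation is deferred so batches can be sized to fit available memory. The toy model and data are illustrative.

      import torch
      import torch.nn as nn
      from koila import lazy

      model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
      loss_fn = nn.CrossEntropyLoss()

      inputs = torch.randn(8, 28, 28)
      labels = torch.randint(0, 10, (8,))
      # Wrap tensors lazily, declaring dim 0 as the batch dimension.
      (inputs, labels) = lazy(inputs, labels, batch=0)

      out = model(inputs)          # builds the deferred computation graph
      loss = loss_fn(out, labels)
      loss.backward()              # actual evaluation happens here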
  • 23
    Pyreft

    ReFT: Representation Finetuning for Language Models

    Pyreft is a library from Stanford NLP implementing ReFT (Representation Finetuning), which adapts transformer language models by training lightweight interventions on hidden representations rather than updating the model's weights, with an emphasis on efficient, resource-conserving training and customizability for NLP tasks.
    Downloads: 0 This Week
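
    A hedged sketch following the pattern in the pyreft README: attach a low-rank ReFT intervention to one layer's representations and train only those parameters. The model id, layer index, and rank below are illustrative choices, not requirements.

      import torch
      import transformers
      import pyreft

      model = transformers.AutoModelForCausalLM.from_pretrained(
          "meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16)

      # Intervene on the residual-stream output of one transformer layer.
      reft_config = pyreft.ReftConfig(representations={
          "layer": 15, "component": "block_output",
          "intervention": pyreft.LoreftIntervention(
              embed_dim=model.config.hidden_size, low_rank_dimension=4)})
      reft_model = pyreft.get_reft_model(model, reft_config)
      reft_model.print_trainable_parameters()  # only intervention params train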
  • 24
    Watermark-Removal

    Machine learning image inpainting task that removes watermarks

    ...Through these techniques, the model learns to identify regions of the image affected by the watermark and generate realistic replacements for the missing visual information. The repository contains code for preprocessing images, training the model, and running inference on images to automatically remove watermark artifacts.
    Downloads: 5 This Week
  • 25
    SimpleTuner

    A general fine-tuning kit geared toward image/video/audio diffusion

    ...The system includes configuration-driven training processes that allow users to define datasets, model paths, and training parameters with minimal setup. SimpleTuner also emphasizes experimentation and academic collaboration, encouraging contributions and iterative improvements from the open-source community.
    Downloads: 0 This Week