Showing 138 open source projects for "training"

  • 1
    Xtuner

    A Next-Generation Training Engine Built for Ultra-Large MoE Models

    Xtuner is a large-scale training engine designed for efficient training and fine-tuning of modern large language models, particularly mixture-of-experts architectures. The framework focuses on enabling scalable training for extremely large models while maintaining efficiency across distributed computing environments. Unlike traditional 3D parallel training strategies, XTuner introduces optimized parallelism techniques that simplify scaling and reduce system complexity when training massive models. ...
    Downloads: 1 This Week
  • 2
    slime LLM

    slime is an LLM post-training framework for RL Scaling

    slime is an open-source large language model (LLM) post-training framework developed to support reinforcement learning (RL)-based scaling and high-performance training workflows for advanced LLMs, blending training and rollout modules into an extensible system. It offers a flexible architecture that connects high-throughput training (e.g., via Megatron-LM) with a customizable data generation pipeline, enabling researchers and engineers to iterate on new RL training paradigms effectively. ...
    Downloads: 1 This Week
  • 3
    LLaMA-Factory

    Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

    LLaMA-Factory is a unified fine-tuning and training framework supporting Meta's LLaMA family along with 100+ other LLMs and vision-language models. It enables researchers and developers to train and customize models efficiently using advanced optimization techniques, including parameter-efficient methods such as LoRA; a minimal sketch of the kind of tuning it automates follows this entry.
    Downloads: 9 This Week
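    To make the "efficient fine-tuning" concrete, here is a minimal sketch of LoRA-style parameter-efficient tuning written directly against the Hugging Face transformers and peft libraries rather than LLaMA-Factory's own CLI; the checkpoint name and hyperparameters are placeholder choices:

        # Minimal LoRA setup with Hugging Face peft (illustrative; LLaMA-Factory
        # wraps this kind of configuration behind YAML recipes and a CLI).
        from transformers import AutoModelForCausalLM, AutoTokenizer
        from peft import LoraConfig, get_peft_model

        base = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
        tokenizer = AutoTokenizer.from_pretrained(base)
        model = AutoModelForCausalLM.from_pretrained(base)

        # LoRA trains small low-rank adapter matrices instead of all base weights.
        config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                            target_modules=["q_proj", "v_proj"],
                            task_type="CAUSAL_LM")
        model = get_peft_model(model, config)
        model.print_trainable_parameters()  # typically well under 1% trainable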
  • 4
    SwanLab

    An open-source, modern-design AI training tracking and visualization tool

    SwanLab is an open-source experiment tracking and visualization platform designed to help machine learning engineers monitor, compare, and analyze the training of artificial intelligence models. The tool records training metrics, hyperparameters, model outputs, and experiment configurations so that developers can easily understand how different experiments perform over time. It provides a modern user interface for visualizing results, enabling teams to compare runs, track model performance trends, and collaborate on machine learning research. ...
    Downloads: 5 This Week
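    A minimal tracking loop, assuming SwanLab's documented wandb-style Python API (the project name and metric values below are made up for illustration):

        import random
        import swanlab

        # Start a run and record its hyperparameters.
        swanlab.init(project="demo-experiments", config={"lr": 1e-3, "epochs": 3})
        for epoch in range(3):
            loss = 1.0 / (epoch + 1) + random.random() * 0.01  # stand-in for training
            swanlab.log({"train/loss": loss})                  # logged per step
        swanlab.finish()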
  • 5
    ERNIE

    The official repository for ERNIE 4.5 and ERNIEKit

    ERNIE is an open-source large-model toolkit and model family from the PaddlePaddle ecosystem that focuses on training, fine-tuning, compression, and practical application of ERNIE large language models. The repository positions ERNIEKit as an industrial-grade development toolkit, emphasizing end-to-end workflows that span high-performance pre-training, supervised fine-tuning, and alignment. It supports both full-parameter training and parameter-efficient approaches so teams can choose between maximum quality and lower-cost adaptation depending on their constraints. ...
    Downloads: 2 This Week
  • 6
    MiniMind

    Train a 26M-parameter GPT from scratch in just 2h

    minimind is a framework that enables users to train a 26-million-parameter GPT (Generative Pre-trained Transformer) model from scratch in approximately two hours. It provides a streamlined process for data preparation, model training, and evaluation, making it accessible for individuals and organizations to develop their own language models without extensive computational resources.
    Downloads: 5 This Week
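    The 26M figure is easy to sanity-check with a back-of-envelope transformer parameter count; the dimensions below are illustrative assumptions, not necessarily MiniMind's actual configuration:

        # Rough decoder-only parameter count: embeddings + 12*d^2 per layer
        # (4*d^2 for the Q/K/V/O attention projections, 8*d^2 for a 4x-wide MLP).
        vocab, d, layers = 6400, 512, 8
        total = vocab * d + layers * 12 * d * d
        print(f"{total/1e6:.1f}M parameters")  # ~28.4M, the right order of magnitude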
  • 7
    Coconut

    Training Large Language Models to Reason in a Continuous Latent Space

    Coconut is the official PyTorch implementation of the research paper “Training Large Language Models to Reason in a Continuous Latent Space.” The framework introduces a novel method for enhancing large language models (LLMs) with continuous latent reasoning steps, enabling them to generate and refine reasoning chains within a learned latent space rather than relying solely on discrete symbolic reasoning. It supports training across multiple reasoning paradigms—including standard Chain-of-Thought (CoT), no-thought, and hybrid configurations—using configurable training stages and latent representations. ...
    Downloads: 1 This Week
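    The core mechanism is easy to sketch: instead of decoding a token at each reasoning step, the model's last hidden state is fed back as the next input embedding. The snippet below illustrates this with GPT-2 as a stand-in model; Coconut's official code additionally trains the model in stages to make use of these latent steps:

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        ids = tok("2 + 2 =", return_tensors="pt").input_ids
        embeds = model.get_input_embeddings()(ids)

        with torch.no_grad():
            for _ in range(3):  # three "continuous thoughts", no tokens emitted
                out = model(inputs_embeds=embeds, output_hidden_states=True)
                thought = out.hidden_states[-1][:, -1:, :]    # final hidden state
                embeds = torch.cat([embeds, thought], dim=1)  # feed back as input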
  • 8
    DeepSeek-V3

    Powerful AI language model (MoE) optimized for efficiency/performance

    ...It employs Multi-head Latent Attention (MLA) and the DeepSeekMoE architecture to enhance computational efficiency. The model introduces an auxiliary-loss-free load balancing strategy and a multi-token prediction training objective to boost performance. Trained on 14.8 trillion diverse, high-quality tokens, DeepSeek-V3 underwent supervised fine-tuning and reinforcement learning to fully realize its capabilities. Evaluations indicate that it outperforms other open-source models and rivals leading closed-source models, achieving this with a training duration of 55 days on 2,048 Nvidia H800 GPUs, costing approximately $5.58 million.
    Downloads: 100 This Week
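    The quoted cost figures cross-check neatly against the GPU count and duration:

        # 2,048 GPUs for 55 days -> GPU-hours, then the implied hourly rate.
        gpu_hours = 2048 * 55 * 24                         # = 2,703,360 H800 GPU-hours
        print(f"${5.58e6 / gpu_hours:.2f} per GPU-hour")   # ~$2.06/GPU-hour, in line
                                                           # with quoted H800 rental rates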
  • 9
    Megatron

    Ongoing research training transformer models at scale

    Megatron is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This repository is for ongoing research on training large transformer language models at scale. We developed efficient, model-parallel (tensor, sequence, and pipeline), and multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision. Megatron is also used in NeMo Megatron, a framework that helps enterprises overcome the challenges of building and training sophisticated natural language processing models with billions and trillions of parameters. ...
    Downloads: 3 This Week
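    A toy, single-process illustration of the tensor (column) parallelism Megatron uses: each rank would hold one column slice of a weight matrix and compute a slice of the output, with torch.distributed collectives gathering the pieces in the real multi-GPU implementation:

        import torch

        d_in, d_out, world = 8, 16, 2
        x = torch.randn(4, d_in)
        w = torch.randn(d_in, d_out)

        full = x @ w                              # unsharded reference result
        shards = w.chunk(world, dim=1)            # column-split across "ranks"
        partial = [x @ s for s in shards]         # each rank's local matmul
        assert torch.allclose(full, torch.cat(partial, dim=1), atol=1e-5)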
  • 10
    MedicalGPT

    MedicalGPT: Training Your Own Medical GPT Model with a ChatGPT-Style Training Pipeline

    MedicalGPT trains medical GPT models with a ChatGPT-style training pipeline, implementing the full sequence of stages: secondary (continued) pre-training, supervised fine-tuning, reward modeling, and reinforcement learning.
    Downloads: 0 This Week
  • 11
    Happy-LLM

    A from-scratch tutorial on large language model principles and practice

    ...The project guides learners through the entire conceptual and practical pipeline of modern LLM development, starting with foundational natural language processing concepts and gradually progressing to advanced architectures and training techniques. It explains the Transformer architecture, pre-training paradigms, and model scaling strategies while also providing hands-on coding examples so readers can implement and experiment with their own models. The tutorial emphasizes practical understanding by walking users through building and training small language models, including tokenizer construction, pre-training workflows, and fine-tuning methods.
    Downloads: 1 This Week
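    For example, the tokenizer-construction step the tutorial walks through can be reproduced in a few lines with the Hugging Face tokenizers library (the corpus file name is a placeholder):

        from tokenizers import Tokenizer, models, pre_tokenizers, trainers

        tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
        tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
        trainer = trainers.BpeTrainer(
            vocab_size=8000,
            special_tokens=["[UNK]", "[PAD]", "[BOS]", "[EOS]"])
        tokenizer.train(["corpus.txt"], trainer)  # learn BPE merges from raw text
        print(tokenizer.encode("hello world").tokens)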
  • 12
    AReaL

    Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible

    AReaL is an open-source, fully asynchronous reinforcement learning training system designed for large reasoning and agentic models: models that reason over multiple steps and agents that interact with environments. It is developed by the AReaL Team at Ant Group (inclusionAI), builds upon the ReaLHF project, and releases training details, datasets, and models for reproducibility.
    Downloads: 2 This Week
  • 13
    MaxText

    A simple, performant and scalable Jax LLM

    ...MaxText includes ready-to-use configurations and reproducible training examples that help developers understand how to deploy large-scale AI workloads with modern machine learning infrastructure.
    Downloads: 0 This Week
  • 14
    GeneralAI

    Large-scale Self-supervised Pre-training Across Tasks, Languages, etc.

    Fundamental research to develop new architectures for foundation models and AI, focusing on modeling generality and capability, as well as training stability and efficiency.
    Downloads: 0 This Week
  • 15
    dLLM

    dLLM: Simple Diffusion Language Modeling

    ...The project provides an integrated pipeline that standardizes how diffusion language models are trained, evaluated, and deployed, helping researchers reproduce experiments and compare results more easily. The framework includes scalable training infrastructure inspired by modern deep learning toolkits and supports integrations with widely used libraries for distributed training.
    Downloads: 0 This Week
  • 16
    Magicoder

    Empowering Code Generation with OSS-Instruct

    Magicoder is an open-source family of large language models designed specifically for code generation and software development tasks. The project focuses on improving the quality and diversity of code generation by training models with a novel dataset construction approach known as OSS-Instruct. This technique uses open-source code repositories as a foundation for generating more realistic and diverse instruction datasets for training language models. By grounding training data in real open-source examples, Magicoder aims to reduce bias and improve the reliability of code generation results compared to models trained solely on synthetic instructions. ...
    Downloads: 0 This Week
  • 17
    The Alignment Handbook

    Robust recipes to align language models with human and AI preferences

    The Alignment Handbook is an open-source resource created to provide practical guidance for aligning large language models with human preferences and safety requirements. The project focuses on the post-training stage of model development, where models are refined after pre-training to behave more helpfully, safely, and reliably in real-world applications. It provides detailed training recipes that explain how to perform tasks such as supervised fine-tuning, preference modeling, and reinforcement learning from human feedback. The handbook also includes reproducible workflows for training instruction-following models and evaluating alignment quality across different datasets and benchmarks. ...
    Downloads: 0 This Week
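    At the heart of preference-tuning recipes like these is an objective such as DPO, shown here as a pure-PyTorch sketch (the handbook's actual recipes build on Hugging Face TRL trainers):

        import torch
        import torch.nn.functional as F

        def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
            # Inputs: summed log-probs of the chosen/rejected responses under
            # the trained policy (pi) and the frozen reference model (ref).
            margin = (pi_chosen - pi_rejected) - (ref_chosen - ref_rejected)
            return -F.logsigmoid(beta * margin).mean()

        # Toy check: loss is small when the policy prefers the chosen response
        # more strongly than the reference does.
        print(dpo_loss(torch.tensor([-5.]), torch.tensor([-9.]),
                       torch.tensor([-6.]), torch.tensor([-7.])))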
  • 18
    LLM Datasets

    Curated list of datasets and tools for post-training

    ...The repository aims to make datasets easy to inspect and transform, with scripts for downloading, deduping, cleaning, and converting to formats like JSONL that slot into training pipelines. It highlights instruction-tuning and conversation-style corpora while also pointing to code, math, or domain-specific sets for targeted capabilities. Quality is a recurring theme: examples and utilities help filter low-value samples, enforce length limits, and split train/validation consistently so results are comparable. ...
    Downloads: 2 This Week
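    The hygiene steps it emphasizes (dedupe, length limits, consistent splits) boil down to something like the following standard-library sketch; file names and thresholds are placeholders:

        import hashlib, json, random

        with open("raw.jsonl") as f:
            rows = [json.loads(line) for line in f]

        seen, clean = set(), []
        for r in rows:
            key = hashlib.sha256(r["text"].encode()).hexdigest()  # exact-dup key
            if key not in seen and 10 <= len(r["text"]) <= 8192:  # length limits
                seen.add(key)
                clean.append(r)

        random.Random(42).shuffle(clean)     # deterministic, reproducible split
        cut = int(0.95 * len(clean))
        for name, part in (("train.jsonl", clean[:cut]), ("val.jsonl", clean[cut:])):
            with open(name, "w") as f:
                f.writelines(json.dumps(r) + "\n" for r in part)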
  • 19
    CodeGen

    Open-source model for program synthesis

    ...CodeGen supports multi-turn program synthesis, meaning it can generate complex programs through a sequence of prompts that progressively refine the solution. The project also includes training infrastructure and model checkpoints that allow researchers to experiment with different model sizes and training configurations. Its architecture and training approach enable the models to perform competitively with proprietary coding models on benchmark tasks.
    Downloads: 0 This Week
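    The published checkpoints can be exercised directly through transformers; a minimal completion example with one of the small variants:

        from transformers import AutoModelForCausalLM, AutoTokenizer

        name = "Salesforce/codegen-350M-mono"   # small published checkpoint
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(name)

        prompt = "# return True if n is prime\ndef is_prime(n):"
        ids = tok(prompt, return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=64, do_sample=False,
                             pad_token_id=tok.eos_token_id)
        print(tok.decode(out[0], skip_special_tokens=True))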
  • 20
    bitsandbytes

    Accessible large language models via k-bit quantization for PyTorch

    ...The project includes specialized optimizers and quantized matrix operations that significantly reduce the memory footprint of training and inference workloads. By lowering the hardware requirements needed to work with large models, bitsandbytes helps make modern AI development more accessible to researchers and engineers. The library has become widely used in machine learning pipelines that rely on parameter-efficient training techniques and low-precision inference.
    Downloads: 2 This Week
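    A representative use is swapping a standard optimizer for an 8-bit one, which stores optimizer state in 8 bits to cut training memory; a minimal sketch (bitsandbytes generally expects a CUDA GPU):

        import torch
        import bitsandbytes as bnb

        model = torch.nn.Linear(1024, 1024).cuda()             # stand-in for a large model
        opt = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)  # 8-bit optimizer states

        x = torch.randn(8, 1024, device="cuda")
        loss = model(x).pow(2).mean()
        loss.backward()
        opt.step()
        opt.zero_grad()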
  • 21
    Bespoke Curator

    Synthetic data curation for post-training and data extraction

    Curator is an open-source Python library designed to build synthetic data pipelines for training and evaluating machine learning models, particularly large language models. The system helps developers generate, transform, and curate high-quality datasets by combining automated generation with structured validation and filtering. It supports workflows where models are used to produce synthetic examples that can later be refined into reliable training datasets for reasoning, question answering, or structured information extraction tasks. ...
    Downloads: 1 This Week
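    The underlying pattern is a generate-validate-filter loop; the sketch below illustrates that pattern with hypothetical stand-in functions, not Curator's actual API:

        import json

        def generate(prompt: str) -> str:
            # Stand-in for an LLM call returning a candidate example as JSON.
            return '{"question": "What is 2+2?", "answer": "4"}'

        def is_valid(example: dict) -> bool:
            return bool(example.get("question")) and bool(example.get("answer"))

        dataset = []
        for topic in ("arithmetic", "geography"):
            candidate = json.loads(generate(f"Write a QA pair about {topic}."))
            if is_valid(candidate):        # structured validation before keeping
                dataset.append(candidate)
        print(len(dataset), "curated examples")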
  • 22
    RLHF-Reward-Modeling

    Recipes to train reward models for RLHF

    RLHF-Reward-Modeling is an open-source research framework focused on training reward models used in reinforcement learning from human feedback for large language models. In RLHF pipelines, reward models are responsible for evaluating generated responses and assigning scores that guide the model toward outputs that better match human preferences. The repository provides training recipes and implementations for building reward and preference models using modern machine learning frameworks. ...
    Downloads: 0 This Week
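    The core of most such recipes is a pairwise Bradley-Terry loss that pushes the chosen response's scalar reward above the rejected one's; a pure-PyTorch sketch (the repo's recipes wrap this around a sequence-classification LM):

        import torch
        import torch.nn.functional as F

        def pairwise_loss(r_chosen, r_rejected):
            return -F.logsigmoid(r_chosen - r_rejected).mean()

        print(pairwise_loss(torch.tensor([1.2]), torch.tensor([0.3])))  # small loss
        print(pairwise_loss(torch.tensor([0.3]), torch.tensor([1.2])))  # large loss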
  • 23
    Transformers & LLMs cheatsheet

    VIP cheatsheet for Stanford's CME 295 Transformers and Large Language Models

    ...The project compiles structured notes, mathematical explanations, diagrams, and implementation references covering transformer internals, attention mechanisms, tokenization, training pipelines, and inference strategies. It is designed to help students and practitioners understand the technical foundations behind contemporary AI systems such as GPT-style models and multimodal architectures. The repository emphasizes conceptual clarity while still addressing practical engineering considerations involved in training and scaling transformer models. ...
    Downloads: 3 This Week
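    The attention mechanism at the center of such notes, Attention(Q, K, V) = softmax(QK^T / sqrt(d)) V, fits in a few lines of PyTorch:

        import math
        import torch

        def attention(q, k, v):
            scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
            return torch.softmax(scores, dim=-1) @ v

        q = k = v = torch.randn(1, 4, 8)  # (batch, seq, dim) toy self-attention input
        print(attention(q, k, v).shape)   # torch.Size([1, 4, 8])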
  • 24
    SWIFT LLM

    Use PEFT or full-parameter training to run CPT/SFT/DPO/GRPO on 600+ LLMs

    SWIFT LLM is a comprehensive framework developed within the ModelScope ecosystem for training, fine-tuning, evaluating, and deploying large language models and multimodal models. The platform provides a full machine learning pipeline that supports tasks ranging from model pre-training to reinforcement learning alignment techniques. It integrates with popular inference engines such as vLLM and LMDeploy to accelerate deployment and runtime performance.
    Downloads: 0 This Week
  • 25
    EmoLLM

    Pre & Post-training & Dataset & Evaluation & Deploy & RAG

    ...The project is designed to help users through mental health conversations and has been fine-tuned from existing instruction-following LLMs rather than built as a base model from scratch. Its repository includes multiple model variants and training configurations spanning several underlying model families, including InternLM, Qwen, DeepSeek, Mixtral, LLaMA, and others, which shows that the initiative is structured as a broad ecosystem rather than a single release. The project also covers more than just model weights, with material for datasets, fine-tuning, evaluation, deployment, demos, RAG, and related subprojects such as its psychological digital assistant work.
    Downloads: 4 This Week