Showing 137 open source projects for "learning"

  • 1
    Prompt in-context learning

    Resources for in-context learning and prompt engineering

    Prompt-In-Context-Learning is an open-source repository that serves as a comprehensive engineering guide and curated resource collection for understanding and applying in-context learning and prompt engineering with large language models. The project gathers research papers, tutorials, prompt examples, and practical guides that help developers and researchers learn how to design effective prompts for models such as GPT-3, ChatGPT, and other foundation models.
    Downloads: 0 This Week
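The core idea the repository documents, in-context learning, can be shown with a toy few-shot prompt: the "training" happens entirely inside the prompt via a handful of labeled examples. A minimal sketch in plain Python (the task, labels, and `build_few_shot_prompt` helper are illustrative, not from the repository):

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot classification prompt from (text, label) pairs."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The unanswered query comes last; the model completes the label.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("The movie was wonderful.", "positive"),
    ("I wasted two hours of my life.", "negative"),
]
print(build_few_shot_prompt(examples, "An instant classic."))
```

The same structure scales to more shots or other tasks; prompt-engineering guides like this one mostly catalogue which example selections and formats work best.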
  • 2
    Anomaly Detection Learning Resources

    Anomaly detection related books, papers, videos, and toolboxes

    Anomaly Detection Learning Resources is a curated open-source repository that collects educational materials, tools, and academic references related to anomaly detection and outlier analysis in data science. The project serves as a centralized index for researchers and practitioners who want to explore algorithms, datasets, and publications associated with detecting unusual patterns in data.
    Downloads: 0 This Week
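To give a flavor of the techniques the collection indexes, here is one of the simplest outlier detectors: the modified z-score test based on the median absolute deviation (MAD), with the conventional 3.5 cutoff. The function and data are illustrative, not from the repository:

```python
import statistics

def mad_outliers(values, threshold=3.5):
    """Flag outliers via the modified z-score 0.6745 * |x - median| / MAD.
    Median-based statistics resist the very outliers being hunted."""
    med = statistics.median(values)
    mad = statistics.median(abs(x - med) for x in values)
    if mad == 0:
        return []  # degenerate case: at least half the points are identical
    return [x for x in values if 0.6745 * abs(x - med) / mad > threshold]

data = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 42.0]  # one injected anomaly
print(mad_outliers(data))  # → [42.0]
```

A plain mean/standard-deviation z-score would struggle here: the single extreme point inflates the standard deviation enough to mask itself, which is why robust estimators are a recurring theme in the anomaly-detection literature.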
  • 3
    NBA Sports Betting Machine Learning

    NBA sports betting using machine learning

    NBA-Machine-Learning-Sports-Betting is an open-source Python project that applies machine learning techniques to predict outcomes of National Basketball Association games for analytical and betting-related research. The system gathers historical team statistics and game data spanning multiple seasons, beginning with the 2007–2008 NBA season and continuing through the present.
    Downloads: 2 This Week
  • 4
    ML Retreat

    Machine Learning Journal for Intermediate to Advanced Topics

    ML Retreat is an open-source learning repository that serves as a structured journal documenting advanced topics in machine learning and artificial intelligence. The project compiles detailed notes, technical explanations, and curated resources that guide readers through complex concepts across modern AI research. Rather than functioning as a traditional tutorial series, the repository is organized as a learning journey that progressively explores increasingly advanced subjects. ...
    Downloads: 5 This Week
  • 5
    DeepSeek R1

    Open-source, high-performance AI model with advanced reasoning

    ...The model employs a Mixture of Experts (MoE) architecture, comprising 671 billion total parameters with 37 billion active parameters per token, and supports a context length of up to 128,000 tokens. DeepSeek-R1's training regimen uniquely integrates large-scale reinforcement learning (RL) without relying on supervised fine-tuning, enabling the model to develop advanced reasoning capabilities. This approach has resulted in performance comparable to leading models like OpenAI's o1, while maintaining cost-efficiency. To further support the research community, DeepSeek has released distilled versions of the model based on architectures such as LLaMA and Qwen.
    Downloads: 137 This Week
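The Mixture of Experts pattern described above, where only a small subset of experts is active per token, can be sketched generically. The toy top-k gate below is not DeepSeek's implementation; the scalar expert functions are stand-ins for full expert feed-forward networks:

```python
import math

def top_k_gate(logits, k=2):
    """Softmax over router logits, keep the top-k experts, renormalize."""
    exps = [math.exp(l - max(logits)) for l in logits]  # stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return {i: probs[i] / norm for i in top}

def moe_output(token, experts, router_logits, k=2):
    """Only the selected experts run; their outputs are gate-weighted."""
    gate = top_k_gate(router_logits, k)
    return sum(w * experts[i](token) for i, w in gate.items())

# Toy experts standing in for expert FFNs.
experts = [lambda x: 2 * x, lambda x: x + 1, lambda x: -x, lambda x: x * x]
print(moe_output(3.0, experts, router_logits=[2.0, 1.0, -1.0, 0.5]))
```

This is what makes a 671B-parameter MoE tractable: each token pays the compute cost of only the active experts (37B parameters here), not the whole model.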
  • 6
    DeepSeek-V3

    Powerful AI language model (MoE) optimized for efficiency/performance

    ...The model introduces an auxiliary-loss-free load balancing strategy and a multi-token prediction training objective to boost performance. Trained on 14.8 trillion diverse, high-quality tokens, DeepSeek-V3 underwent supervised fine-tuning and reinforcement learning to fully realize its capabilities. Evaluations indicate that it outperforms other open-source models and rivals leading closed-source models, achieving this with a training duration of 55 days on 2,048 Nvidia H800 GPUs, costing approximately $5.58 million.
    Downloads: 87 This Week
  • 7
    MedicalGPT

    MedicalGPT: Training Your Own Medical GPT Model with a ChatGPT Training Pipeline

    MedicalGPT trains medical GPT models with a ChatGPT-style training pipeline, implementing the full sequence of secondary pre-training, supervised fine-tuning, reward modeling, and reinforcement learning.
    Downloads: 1 This Week
  • 8
    H2O LLM Studio

    Framework and no-code GUI for fine-tuning LLMs

    Welcome to H2O LLM Studio, a framework and no-code GUI designed for fine-tuning state-of-the-art large language models (LLMs). You can also use H2O LLM Studio from the command line interface (CLI) by specifying a configuration file that contains all the experiment parameters. To fine-tune using H2O LLM Studio with the CLI, activate the pipenv environment by running make shell. With H2O LLM Studio, training your large language model is easy and intuitive. First, upload your dataset and then start...
    Downloads: 2 This Week
  • 9
    Autonomous Agents

    Autonomous Agents (LLMs) research papers. Updated Daily

    Autonomous-Agents is a research-focused repository that collects implementations, experiments, and academic resources related to autonomous multi-agent systems and intelligent robotics. The project explores how multiple agents can cooperate and interact with complex environments through machine learning, imitation learning, and multimodal sensing. It includes frameworks that integrate visual perception, tactile sensing, and spatial reasoning to guide the actions of robotic agents during manipulation or collaborative tasks. One of the central concepts explored in the repository is the integration of different sensory modalities using advanced machine learning techniques such as Feature-wise Linear Modulation and graph-based attention mechanisms. ...
    Downloads: 3 This Week
  • 10
    PyTorch-Tutorial-2nd

    CV, NLP, LLM project applications, and advanced engineering deployment

    PyTorch-Tutorial-2nd is an open-source educational repository that provides structured tutorials for learning deep learning with the PyTorch framework. The project serves as a practical companion to a second edition of a PyTorch learning guide and is designed to help learners understand neural network concepts through hands-on coding examples. The repository covers a wide range of topics including tensor operations, neural network construction, model training workflows, and optimization strategies. ...
    Downloads: 0 This Week
  • 11
    TTRL

    Test-Time Reinforcement Learning

    TTRL is an open-source framework for test-time reinforcement learning in large language models, with a particular focus on reasoning tasks where ground-truth labels are not available during inference. The project addresses the problem of how to generate useful reward signals from unlabeled test-time data, and its central insight is that common test-time scaling practices such as majority voting can be repurposed into reward estimates for online reinforcement learning.
    Downloads: 0 This Week
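The repurposing of majority voting into a reward signal can be sketched in a few lines: the most common answer among sampled rollouts serves as a pseudo-label, and each rollout is rewarded for agreeing with it. This illustrates the stated insight, not TTRL's actual code:

```python
from collections import Counter

def majority_vote_rewards(sampled_answers):
    """Without ground truth, treat the majority answer among sampled
    rollouts as a pseudo-label and reward agreement with it."""
    pseudo_label, _ = Counter(sampled_answers).most_common(1)[0]
    return [1.0 if a == pseudo_label else 0.0 for a in sampled_answers]

rollouts = ["42", "42", "41", "42", "40"]
print(majority_vote_rewards(rollouts))  # → [1.0, 1.0, 0.0, 1.0, 0.0]
```

These per-rollout rewards can then drive an ordinary online RL update, which is how test-time scaling practice becomes test-time training signal.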
  • 12
    AgentGuide

    AI Agent Development Guide, LangGraph in Action, Advanced RAG

    AgentGuide is an open-source learning resource designed to provide a structured pathway for understanding and building AI agents. The project aggregates tutorials, research papers, frameworks, and practical resources related to agent development with large language models. Instead of presenting scattered resources, the repository organizes them into a systematic learning roadmap that guides learners from foundational concepts to advanced AI agent systems.
    Downloads: 1 This Week
  • 13
    UCCL

    UCCL is an efficient communication library for GPUs

    UCCL is a high-performance GPU communication library designed to support distributed machine learning workloads and large-scale AI systems. The library focuses on enabling efficient data transfer and collective communication between GPUs during training and inference processes. It supports a variety of communication patterns including collective operations such as all-reduce as well as peer-to-peer transfers that are commonly used in modern machine learning architectures. ...
    Downloads: 0 This Week
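The all-reduce collective mentioned above can be illustrated with a pure-Python simulation of the classic ring algorithm (reduce-scatter followed by all-gather), the pattern widely used for gradient synchronization in distributed training. This is a generic sketch, not UCCL's implementation:

```python
def ring_allreduce(node_buffers):
    """Simulate ring all-reduce: every node ends with the elementwise sum.
    node_buffers[i] is node i's local vector; assumes length divisible by n."""
    n = len(node_buffers)
    bufs = [list(b) for b in node_buffers]
    chunk = len(bufs[0]) // n
    # Reduce-scatter: after n-1 steps, node i holds the fully summed chunk (i+1) % n.
    for step in range(n - 1):
        sends = [(i, (i - step) % n) for i in range(n)]
        data = [bufs[i][c * chunk:(c + 1) * chunk] for i, c in sends]  # snapshot
        for (i, c), d in zip(sends, data):
            dst = (i + 1) % n
            for j in range(chunk):
                bufs[dst][c * chunk + j] += d[j]
    # All-gather: circulate the completed chunks so every node gets the full sum.
    for step in range(n - 1):
        sends = [(i, (i + 1 - step) % n) for i in range(n)]
        data = [bufs[i][c * chunk:(c + 1) * chunk] for i, c in sends]
        for (i, c), d in zip(sends, data):
            bufs[(i + 1) % n][c * chunk:(c + 1) * chunk] = d
    return bufs

# Three nodes, three-element vectors: every node ends with [12, 15, 18].
print(ring_allreduce([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
```

Each node sends and receives only one chunk per step, so bandwidth use is balanced around the ring; real libraries add pipelining, topology awareness, and RDMA transports on top of this basic structure.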
  • 14
    RLHF-Reward-Modeling

    Recipes to train reward model for RLHF

    RLHF-Reward-Modeling is an open-source research framework focused on training reward models used in reinforcement learning from human feedback for large language models. In RLHF pipelines, reward models are responsible for evaluating generated responses and assigning scores that guide the model toward outputs that better match human preferences. The repository provides training recipes and implementations for building reward and preference models using modern machine learning frameworks. ...
    Downloads: 1 This Week
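Reward models of this kind are commonly trained with a Bradley-Terry pairwise objective on preference pairs: minimize -log(sigmoid(r_chosen - r_rejected)) so the preferred response scores higher. A minimal sketch of that loss (illustrative, not the repository's recipe):

```python
import math

def pairwise_loss(r_chosen, r_rejected):
    """Bradley-Terry objective for preference pairs:
    -log(sigmoid(r_chosen - r_rejected))."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the reward model scores the preferred response higher.
print(pairwise_loss(2.0, 0.5))  # modest margin
print(pairwise_loss(5.0, 0.5))  # larger margin, smaller loss
```

In a real pipeline the scalar rewards come from a model head over the full prompt-plus-response, and this loss is averaged over a dataset of human preference comparisons.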
  • 15
    SWIFT LLM

    Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs

    SWIFT LLM is a comprehensive framework developed within the ModelScope ecosystem for training, fine-tuning, evaluating, and deploying large language models and multimodal models. The platform provides a full machine learning pipeline that supports tasks ranging from model pre-training to reinforcement learning alignment techniques. It integrates with popular inference engines such as vLLM and LMDeploy to accelerate deployment and runtime performance. The framework also includes support for many modern training strategies, including preference learning methods and parameter-efficient fine-tuning techniques. ms-swift is designed to work with hundreds of language and multimodal models, providing a unified environment for experimentation and production deployment.
    Downloads: 0 This Week
  • 16
    Megatron

    Ongoing research training transformer models at scale

    Megatron is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This repository is for ongoing research on training large transformer language models at scale. We developed efficient, model-parallel (tensor, sequence, and pipeline), and multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision. Megatron is also used in NeMo Megatron, a framework to help enterprises overcome the challenges of building and training sophisticated natural language processing models with billions and trillions of parameters. ...
    Downloads: 0 This Week
  • 17
    LLPlayer

    The media player for language learning, with dual subtitles

    LLPlayer is an open-source media player designed specifically for language learning through video content. Unlike traditional media players, the application focuses on advanced subtitle-related features that help learners understand and interact with foreign language media more effectively. The player supports dual subtitles so users can simultaneously view text in both the original language and their native language while watching videos.
    Downloads: 17 This Week
  • 18
    llms-from-scratch-cn

    Build a large language model from scratch with only a Python foundation

    ...Through a collection of notebooks, code examples, and translated learning materials, users can explore how to implement components such as multi-head attention, data loaders, and training pipelines using Python and PyTorch.
    Downloads: 2 This Week
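The multi-head attention component mentioned above builds on scaled dot-product attention: softmax(QK^T / sqrt(d)) applied to the values. A single-head, pure-Python sketch on tiny matrices (illustrative, not the book's code):

```python
import math

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]  # subtract max for stability
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention on plain lists: softmax(QK^T/sqrt(d))V."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)  # how much each key's value contributes
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# The query matches the first key, so the output leans toward the first value.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))
```

Multi-head attention runs several of these in parallel on learned projections of Q, K, and V, then concatenates the results; that is the step-up the repository's notebooks walk through.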
  • 19
    AIDE ML

    AI-Driven Exploration in the Space of Code

    AIDE ML is an open-source research framework designed to explore automated machine learning development through agent-based search and code optimization. The project implements the AIDE algorithm, which uses a tree-search strategy guided by large language models to iteratively generate, evaluate, and refine code. Instead of relying on manual experimentation, the agent autonomously drafts machine learning pipelines, debugs errors, and benchmarks performance against user-defined evaluation metrics. ...
    Downloads: 0 This Week
  • 20
    AgentEvolver

    Towards Efficient Self-Evolving Agent System

    ...The system focuses on improving the efficiency and scalability of training autonomous agents by allowing them to generate tasks, explore environments, and refine strategies without heavy reliance on manually curated datasets. Its architecture combines reinforcement learning with LLM-driven reasoning mechanisms to guide exploration and learning. The framework introduces several key mechanisms, including self-questioning to create new learning tasks, self-navigating to improve exploration through experience reuse, and self-attributing to assign rewards based on the usefulness of actions. These mechanisms enable agents to continuously improve their capabilities while interacting with complex environments and tools. ...
    Downloads: 0 This Week
  • 21
    ReCall

    Learning to Reason with Search for LLMs via Reinforcement Learning

    ...Instead of relying purely on static knowledge stored inside the model, ReCall allows the language model to dynamically decide when it should retrieve information or invoke external capabilities during the reasoning process. The framework uses reinforcement learning to train models to perform these tool calls effectively while solving multi-step reasoning tasks.
    Downloads: 0 This Week
  • 22
    PRIME

    Scalable RL solution for advanced reasoning of language models

    PRIME is an open-source reinforcement learning framework designed to improve the reasoning capabilities of large language models through process-level rewards rather than relying only on final outputs. The system introduces the concept of process reinforcement through implicit rewards, allowing models to receive feedback on intermediate reasoning steps instead of evaluating only the final answer.
    Downloads: 0 This Week
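One common formulation of implicit process rewards derives a per-step signal from the log-likelihood ratio between the trained policy and a frozen reference model, scaled by a coefficient beta; the sketch below contrasts that dense signal with outcome-only rewards. Function names and numbers are illustrative, not PRIME's code:

```python
def implicit_step_rewards(logp_policy, logp_ref, beta=0.05):
    """Per-step implicit reward: beta * (log pi(step) - log pi_ref(step)).
    A step the policy finds much less likely than the reference gets
    penalized, localizing errors inside the reasoning chain."""
    return [beta * (lp - lr) for lp, lr in zip(logp_policy, logp_ref)]

def outcome_only_rewards(n_steps, final_correct):
    """Outcome supervision for contrast: feedback arrives only at the end."""
    return [0.0] * (n_steps - 1) + [1.0 if final_correct else -1.0]

# Four reasoning steps: the process signal flags the weak third step,
# which an outcome-only reward cannot localize.
print(implicit_step_rewards([-0.5, -0.4, -3.0, -0.2], [-0.6, -0.5, -1.0, -0.3]))
print(outcome_only_rewards(4, final_correct=True))
```

The denser credit assignment is the point: intermediate steps receive feedback without ever training an explicit step-level reward labeler.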
  • 23
    Daily Interview

    Datawhale members have compiled a book covering machine learning

    ...Many of the problems include explanations, references, or links to additional learning resources that help users study relevant theory and improve problem-solving skills. The project is particularly useful for developers preparing for coding interviews at technology companies, as it provides a continuous stream of practice material organized by topic and difficulty level.
    Downloads: 0 This Week
  • 24
    bitsandbytes

    Accessible large language models via k-bit quantization for PyTorch

    ...The library has become widely used in machine learning pipelines that rely on parameter-efficient training techniques and low-precision inference.
    Downloads: 0 This Week
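The k-bit quantization idea can be illustrated with absmax int8 quantization of a weight vector: scale by 127 over the largest absolute value, round to integers, and keep the scale for dequantization. A toy sketch of the technique, not the library's API:

```python
def absmax_quantize(weights):
    """Absmax int8 quantization: map weights into [-127, 127] by scaling
    with 127 / max|w|, keeping the scale for later dequantization."""
    scale = 127.0 / max(abs(w) for w in weights)
    q = [round(w * scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int codes and scale."""
    return [v / scale for v in q]

w = [0.5, -1.0, 0.25, 0.9]
q, s = absmax_quantize(w)
print(q)                 # integer codes in the int8 range
print(dequantize(q, s))  # close to the original weights
```

Real k-bit schemes quantize per block or per channel and handle outlier values separately; the trade-off shown here, small rounding error in exchange for a 4x memory reduction versus float32, is the same one that makes large models fit on consumer GPUs.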
  • 25
    LLMs-Zero-to-Hero

    From novice to large language model (LLM) hero

    LLMs-Zero-to-Hero is an open-source educational project designed to guide learners through the complete process of understanding and building large language models from the ground up. The repository presents a structured learning pathway that begins with fundamental concepts in machine learning and progresses toward advanced topics such as model pre-training, fine-tuning, and deployment. Rather than relying entirely on existing frameworks, the project encourages readers to implement important components themselves in order to gain a deeper understanding of how modern language models work internally. ...
    Downloads: 1 This Week