  • 1
    LLMs-from-scratch

    Implement a ChatGPT-like LLM in PyTorch from scratch, step by step

    LLMs-from-scratch is an educational codebase that walks through implementing modern large-language-model components step by step. It emphasizes building blocks—tokenization, embeddings, attention, feed-forward layers, normalization, and training loops—so learners understand not just how to use a model but how it works internally. The repository favors clear Python implementations built on NumPy and PyTorch that can be run and modified without heavyweight frameworks obscuring the logic. ...
    Downloads: 3 This Week
    See Project
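For a flavor of what "building blocks" means in practice, here is a minimal single-head causal self-attention layer in PyTorch. This is an illustrative sketch of the kind of component such a walkthrough implements, not code from the repository; the class name and dimensions are made up.

```python
import torch
import torch.nn as nn

class CausalSelfAttention(nn.Module):
    """Single-head causal self-attention, one of the core building blocks
    step-by-step LLM implementations cover (illustrative, not the repo's code)."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model, bias=False)
        self.k = nn.Linear(d_model, d_model, bias=False)
        self.v = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)
        # Mask future positions so each token attends only to earlier ones.
        mask = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(mask, float("-inf"))
        return torch.softmax(scores, dim=-1) @ v

x = torch.randn(2, 8, 64)
print(CausalSelfAttention(64)(x).shape)  # torch.Size([2, 8, 64])
```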
  • 2
    llms-from-scratch-cn

    Build a large language model from scratch using only basic Python

    llms-from-scratch-cn is an educational open-source project designed to teach developers how to build large language models step by step using practical code and conceptual explanations. The repository provides a hands-on learning path that begins with the fundamentals of natural language processing and gradually progresses toward implementing full GPT-style architectures from the ground up.
    Downloads: 0 This Week
    See Project
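As a taste of that fundamentals-first path, here is a whitespace tokenizer in plain Python, the sort of toy starting point such courses begin with before introducing real subword tokenizers. Purely illustrative; the function names are not from the project.

```python
# A minimal vocabulary tokenizer of the kind "from scratch" walkthroughs
# usually start with: map each unique token to an integer id.
def build_vocab(text: str) -> dict[str, int]:
    return {tok: i for i, tok in enumerate(sorted(set(text.split())))}

def encode(text: str, vocab: dict[str, int]) -> list[int]:
    return [vocab[tok] for tok in text.split()]

corpus = "the cat sat on the mat"
vocab = build_vocab(corpus)
print(encode("the cat", vocab))  # [4, 0]
```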
  • 3
    MiniMind

    Train a 26M-parameter GPT from scratch in just 2h

    minimind is a framework that enables users to train a 26-million-parameter GPT (Generative Pre-trained Transformer) model from scratch in approximately two hours. It provides a streamlined process for data preparation, model training, and evaluation, making it accessible for individuals and organizations to develop their own language models without extensive computational resources.
    Downloads: 1 This Week
    See Project
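To see where a number like 26M comes from, here is a back-of-the-envelope GPT parameter count. The configuration used below is an assumption chosen only to land near MiniMind's scale, not the project's actual hyperparameters.

```python
# Rough GPT parameter count: token embeddings plus ~12*d^2 per layer
# (about 4*d^2 for attention and 8*d^2 for the feed-forward block).
def gpt_params(vocab: int, d: int, n_layers: int) -> int:
    embed = vocab * d          # token embedding table (output head assumed tied)
    per_layer = 12 * d * d
    return embed + n_layers * per_layer

# Hypothetical small config, for illustration only.
print(gpt_params(vocab=6400, d=512, n_layers=8))  # 28,442,624 — the ~26M regime
```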
  • 4
    SimpleLLM

    A 950-line, minimal, extensible LLM inference engine built from scratch

    SimpleLLM is a minimal, extensible large language model inference engine implemented in roughly 950 lines of code, built from scratch to serve both as a learning tool and a research platform for novel inference techniques. It provides the core components of an LLM runtime—such as tokenization, batching, and asynchronous execution—without the abstraction overhead of more complex engines, making it easier for developers and researchers to understand and modify.
    Downloads: 0 This Week
    See Project
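The batching-plus-asynchronous-execution combination the entry mentions usually boils down to a queue that a serving loop drains in groups. A minimal asyncio sketch of that pattern, with made-up names and a fake model call standing in for real inference:

```python
import asyncio

# Queue-and-batch serving pattern (illustrative, not SimpleLLM's actual API).
async def serve_loop(queue: asyncio.Queue, batch_size: int = 4) -> None:
    while True:
        batch = [await queue.get()]                  # wait for the first request
        while len(batch) < batch_size and not queue.empty():
            batch.append(queue.get_nowait())         # opportunistically fill the batch
        for prompt, fut in batch:
            fut.set_result(f"completion for: {prompt!r}")  # stand-in for a model step

async def generate(queue: asyncio.Queue, prompt: str) -> str:
    fut = asyncio.get_running_loop().create_future()
    await queue.put((prompt, fut))
    return await fut                                 # resolved by the serving loop

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    server = asyncio.create_task(serve_loop(queue))
    print(await asyncio.gather(generate(queue, "hello"), generate(queue, "world")))
    server.cancel()

asyncio.run(main())
```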
  • 5
    Happy-LLM

    Large Language Model Principles and Practice Tutorial from Scratch

    Happy-LLM is an open-source educational project created by the Datawhale AI community that provides a structured and comprehensive tutorial for understanding and building large language models from scratch. The project guides learners through the entire conceptual and practical pipeline of modern LLM development, starting with foundational natural language processing concepts and gradually progressing to advanced architectures and training techniques. It explains the Transformer architecture, pre-training paradigms, and model scaling strategies while also providing hands-on coding examples so readers can implement and experiment with their own models. ...
    Downloads: 0 This Week
    See Project
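The pre-training paradigm such tutorials center on is next-token prediction trained with cross-entropy loss. A minimal PyTorch sketch of one training step on a toy model; the model and data here are placeholders, not Happy-LLM's code:

```python
import torch
import torch.nn.functional as F

vocab, d = 100, 32
# Toy "model": embedding followed by a projection back to the vocabulary.
model = torch.nn.Sequential(torch.nn.Embedding(vocab, d), torch.nn.Linear(d, vocab))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

tokens = torch.randint(0, vocab, (4, 16))        # a toy batch of token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # each position predicts the next token
logits = model(inputs)                           # (batch, seq-1, vocab)
loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
loss.backward()
opt.step()
print(float(loss))
```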
  • 6
    EmoLLM

    Pre- and post-training, datasets, evaluation, deployment, and RAG

    ...The project is designed to help users through mental health conversations and has been fine-tuned from existing instruction-following LLMs rather than built as a base model from scratch. Its repository includes multiple model variants and training configurations spanning several underlying model families, including InternLM, Qwen, DeepSeek, Mixtral, LLaMA, and others, which shows that the initiative is structured as a broad ecosystem rather than a single release. The project also covers more than just model weights, with material for datasets, fine-tuning, evaluation, deployment, demos, RAG, and related subprojects such as its psychological digital assistant work.
    Downloads: 2 This Week
    See Project
  • 7
    Nano-vLLM

    A lightweight vLLM implementation built from scratch

    Nano-vLLM is a lightweight implementation of the vLLM inference engine designed to run large language models efficiently while maintaining a minimal and readable codebase. The project recreates the core functionality of vLLM in a simplified architecture written in approximately a thousand lines of Python, making it easier for developers and researchers to understand how modern LLM inference systems work. Despite its compact design, nano-vllm incorporates advanced optimization techniques such...
    Downloads: 2 This Week
    See Project
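One optimization vLLM-style engines are built around is the key/value cache: past attention keys and values are stored and reused at each decoding step instead of being recomputed. A toy sketch of the idea, not nano-vllm's implementation:

```python
import torch

# Incremental decoding with a key/value cache (illustrative sketch).
def attend(q, k_new, v_new, k_cache, v_cache):
    # Append this step's keys/values; past ones are reused, not recomputed.
    k = torch.cat([k_cache, k_new], dim=1)
    v = torch.cat([v_cache, v_new], dim=1)
    w = torch.softmax(q @ k.transpose(-2, -1) / k.size(-1) ** 0.5, dim=-1)
    return w @ v, k, v

d = 64
k_cache = v_cache = torch.zeros(1, 0, d)  # empty caches before the first token
for _ in range(3):                        # decode three tokens one at a time
    q, k_new, v_new = (torch.randn(1, 1, d) for _ in range(3))
    out, k_cache, v_cache = attend(q, k_new, v_new, k_cache, v_cache)
print(k_cache.shape)  # torch.Size([1, 3, 64]): the cache grew one step per token
```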
  • 8
    tiny-llm

    A course on learning LLM inference and serving on Apple Silicon

    tiny-llm is an educational open-source project designed to teach system engineers how large language model inference and serving systems work by building them from scratch. The project is structured as a guided course that walks developers through the process of implementing the core components required to run a modern language model, including attention mechanisms, token generation, and optimization techniques. Rather than relying on high-level machine learning frameworks, the codebase uses mostly low-level array and matrix manipulation APIs so that developers can understand exactly how model inference works internally. ...
    Downloads: 2 This Week
    See Project
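That low-level flavor can be seen in a greedy token-generation loop written with bare array operations. NumPy is used here as a stand-in for the course's array APIs, and the random logit table is a placeholder for a real forward pass:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = 50
# Placeholder "model": a fixed table of next-token logits given the current token.
logit_table = rng.normal(size=(vocab, vocab))

def generate(start: int, steps: int) -> list[int]:
    out = [start]
    for _ in range(steps):
        logits = logit_table[out[-1]]
        probs = np.exp(logits - logits.max())   # numerically stable softmax
        probs /= probs.sum()
        out.append(int(probs.argmax()))         # greedy: pick the most likely token
    return out

print(generate(start=0, steps=5))
```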
  • 9
    TTRL

    Test-Time Reinforcement Learning

    ...The repository is implemented on top of the verl ecosystem, which allows users to enable TTRL as part of an existing reinforcement learning workflow rather than building a new stack from scratch.
    Downloads: 0 This Week
    See Project
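A common way test-time RL derives a training signal without ground-truth labels is to reward agreement with the majority answer among several sampled generations. The toy sketch below illustrates that general idea; it is an assumption about the technique, not code from this repository:

```python
from collections import Counter

# Majority-vote pseudo-reward: samples that agree with the consensus
# answer get reward 1, others get 0 (illustrative sketch).
def majority_vote_rewards(answers: list[str]) -> list[float]:
    majority, _ = Counter(answers).most_common(1)[0]
    return [1.0 if a == majority else 0.0 for a in answers]

samples = ["42", "42", "41", "42"]  # sampled answers to one prompt
print(majority_vote_rewards(samples))  # [1.0, 1.0, 0.0, 1.0]
```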
  • 10
    LLMs-Zero-to-Hero

    From zero to large language model (LLM) hero

    LLMs-Zero-to-Hero is an open-source educational project designed to guide learners through the complete process of understanding and building large language models from the ground up. The repository presents a structured learning pathway that begins with fundamental concepts in machine learning and progresses toward advanced topics such as model pre-training, fine-tuning, and deployment. Rather than relying entirely on existing frameworks, the project encourages readers to implement...
    Downloads: 0 This Week
    See Project
  • 11
    LLM Foundry

    LLM training code for MosaicML foundation models

    Introducing MPT-7B, the first entry in our MosaicML Foundation Series. MPT-7B is a transformer trained from scratch on 1T tokens of text and code. It is open source, available for commercial use, and matches the quality of LLaMA-7B. MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k. Large language models (LLMs) are changing the world, but for those outside well-resourced industry labs, it can be extremely difficult to train and deploy these models. ...
    Downloads: 1 This Week
    See Project
  • 12
    Skywork-R1V4

    Skywork-R1V is an advanced multimodal AI model series

    ...The project introduces a model architecture that transfers the reasoning abilities of advanced text-based models into visual domains so the system can interpret images and perform multi-step reasoning about them. Instead of retraining both language and vision models from scratch, the framework uses a lightweight visual projection layer that connects a pretrained vision backbone with a reasoning-capable language model. This design allows the model to analyze images while maintaining strong textual reasoning performance, enabling tasks such as solving visual math problems, interpreting scientific diagrams, and answering questions about images.
    Downloads: 0 This Week
    See Project
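The connector pattern described above can be sketched as a small trainable projection that maps frozen vision-backbone features into the language model's embedding space. The dimensions and two-layer design below are assumptions for illustration, not Skywork-R1V's actual architecture:

```python
import torch
import torch.nn as nn

class VisualProjector(nn.Module):
    """Maps vision-backbone patch features into the LLM embedding space
    (illustrative dimensions, not the model's real configuration)."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )

    def forward(self, vision_feats: torch.Tensor) -> torch.Tensor:
        # (batch, patches, vision_dim) -> (batch, patches, llm_dim); the LLM
        # then consumes projected patches like ordinary token embeddings.
        return self.proj(vision_feats)

feats = torch.randn(1, 256, 1024)      # stand-in for frozen vision features
print(VisualProjector()(feats).shape)  # torch.Size([1, 256, 4096])
```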
  • 13
    handy-ollama

    Hands-on Ollama: deploy and run large language models on CPU, no GPU required

    handy-ollama is an open-source educational project designed to help developers and AI enthusiasts learn how to deploy and run large language models locally using the Ollama platform. The repository serves as a structured tutorial that explains how to install, configure, and use Ollama to run modern language models on personal hardware without requiring advanced infrastructure. A key focus of the project is enabling users to run large models even without GPUs by leveraging optimized CPU-based...
    Downloads: 0 This Week
    See Project
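Once Ollama is installed and a model has been pulled, a deployment like the one this tutorial teaches can be exercised through Ollama's local HTTP API. A minimal sketch, assuming the server is running on its default port and a model tagged llama3.2 is available locally:

```python
import json
import urllib.request

# One-shot (non-streaming) generation request against a local Ollama server.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3.2",       # any locally pulled model tag works
        "prompt": "Why is the sky blue?",
        "stream": False,           # return a single JSON object, not a stream
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```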
  • 14
    LLaMA

    Inference code for Llama models

    ...This repo is a core piece of the Llama model infrastructure, used by researchers and developers to run LLaMA models locally or in their infrastructure. It is meant for inference (not training from scratch) and connects with aspects like model cards, responsible use, licensing, etc.
    Downloads: 0 This Week
    See Project
  • 15
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    ...We aim to make this repo a centralized and accessible place to gather techniques for training large-scale autoregressive language models, and accelerate research into large-scale training. For those looking for a TPU-centric codebase, we recommend Mesh Transformer JAX. If you are not looking to train models with billions of parameters from scratch, this is likely the wrong library to use. For generic inference needs, we recommend you use the Hugging Face transformers library instead which supports GPT-NeoX models.
    Downloads: 0 This Week
    See Project
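The recommended inference route looks like this with Hugging Face transformers. The checkpoint below is EleutherAI's public 20B model (a very large download); any GPT-NeoX-family checkpoint loads the same way, and the generation settings are arbitrary examples:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/gpt-neox-20b"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tok("Large-scale training is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```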