6 projects for "train" with 2 filters applied:

  • 1
    DeepSpeed

    Deep learning optimization library: makes distributed training easy

    DeepSpeed is an easy-to-use deep learning optimization software suite that enables unprecedented scale and speed for deep learning training and inference. With DeepSpeed you can:
    1. Train or run inference on dense or sparse models with billions or trillions of parameters
    2. Achieve excellent system throughput and efficiently scale to thousands of GPUs
    3. Train or run inference on resource-constrained GPU systems
    4. Achieve unprecedented low latency and high throughput for inference
    5. Achieve extreme compression for unparalleled reductions in inference latency and model size at low cost
    DeepSpeed offers a confluence of system innovations that have made large-scale DL training effective and efficient, greatly improved ease of use, and redefined the DL training landscape in terms of the scale that is possible. ...
    Downloads: 5 This Week
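    As a rough illustration of the workflow above, here is a minimal sketch of wrapping a toy PyTorch model with DeepSpeed's engine via deepspeed.initialize; the model and the config values are illustrative placeholders, not recommendations from the project.

```python
# Minimal sketch: wrapping a toy PyTorch model with the DeepSpeed engine.
# The model and config values below are illustrative placeholders.
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)  # stand-in for a real network

ds_config = {
    "train_batch_size": 32,
    "zero_optimization": {"stage": 2},  # ZeRO-2: partition optimizer state and gradients
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize returns an engine that handles distributed setup,
# ZeRO partitioning, and optimizer stepping behind a familiar train loop.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

x = torch.randn(32, 1024).to(engine.device)
loss = engine(x).pow(2).mean()  # forward pass through the engine
engine.backward(loss)           # backward (engine manages scaling/reduction)
engine.step()                   # optimizer step
```

    In multi-GPU use this is normally launched with the deepspeed CLI launcher, which sets up the distributed environment the engine expects.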
  • 2
    PyTorch3D

    PyTorch3D is FAIR's library of reusable components for deep learning

    PyTorch3D is a comprehensive library for 3D deep learning that brings differentiable rendering, geometric operations, and 3D data structures into the PyTorch ecosystem. It’s designed to make it easy to build and train neural networks that work directly with 3D data such as meshes, point clouds, and implicit surfaces. The library provides fast GPU-accelerated implementations of rendering pipelines, transformations, rasterization, and lighting—making it possible to compute gradients through full 3D rendering processes. Researchers use it for tasks like shape generation, reconstruction, view synthesis, and visual reasoning. ...
    Downloads: 1 This Week
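    To make the "gradients through 3D operations" point concrete, here is a minimal sketch using PyTorch3D's Meshes structure, differentiable point sampling, and chamfer distance; the one-triangle mesh and random target cloud are illustrative stand-ins for real data.

```python
# Minimal sketch: a differentiable 3D loss with PyTorch3D.
import torch
from pytorch3d.structures import Meshes
from pytorch3d.ops import sample_points_from_meshes
from pytorch3d.loss import chamfer_distance

# A one-triangle mesh with learnable vertex positions (illustrative).
verts = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]], requires_grad=True)
faces = torch.tensor([[0, 1, 2]])
mesh = Meshes(verts=[verts], faces=[faces])

# Differentiably sample points from the mesh surface and compare them
# against a target point cloud; gradients flow back to `verts`.
pred_points = sample_points_from_meshes(mesh, num_samples=500)
target_points = torch.rand(1, 500, 3)  # stand-in for a real scan

loss, _ = chamfer_distance(pred_points, target_points)
loss.backward()
print(verts.grad.shape)  # torch.Size([3, 3])
```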
  • 3
    vJEPA-2

    PyTorch code and models for VJEPA2 self-supervised learning from video

    ...This objective encourages the model to learn semantics, motion, and long-range structure without the shortcuts that pixel-level losses can invite. The architecture is designed to scale: spatiotemporal ViT backbones, flexible masking schedules, and efficient sampling let it train on long clips while remaining stable. Trained representations transfer well to downstream tasks such as action recognition, temporal localization, and video retrieval, often with simple linear probes or light fine-tuning. The repository typically includes end-to-end recipes—data pipelines, augmentation policies, training scripts, and evaluation harnesses.
    Downloads: 0 This Week
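    The latent-prediction objective described above can be sketched in a few lines of PyTorch. This is an illustrative toy, not the repository's actual code or API: the encoder, predictor, mask ratio, and EMA momentum are all assumptions standing in for the real spatiotemporal ViT pipeline.

```python
# Illustrative JEPA-style objective: predict a target encoder's latents for
# masked video tokens from the visible ones. NOT the repository's API.
import copy
import torch
import torch.nn.functional as F

dim, num_tokens = 256, 196  # hypothetical token grid for one clip
encoder = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=2)
predictor = torch.nn.Linear(dim, dim)    # stand-in for the predictor network
target_encoder = copy.deepcopy(encoder)  # EMA copy, no gradients
for p in target_encoder.parameters():
    p.requires_grad_(False)

tokens = torch.randn(4, num_tokens, dim)  # patchified video features
mask = torch.rand(4, num_tokens) < 0.5    # random spatiotemporal mask

# Context path sees only visible tokens (masked ones zeroed for simplicity).
context = encoder(tokens * (~mask).unsqueeze(-1))
pred = predictor(context)

# Target path encodes the full clip; the loss is taken in latent space,
# only at masked positions -- no pixel-level reconstruction anywhere.
with torch.no_grad():
    target = target_encoder(tokens)
loss = F.smooth_l1_loss(pred[mask], target[mask])
loss.backward()

# EMA update of the target encoder (momentum value is illustrative).
with torch.no_grad():
    for p_t, p in zip(target_encoder.parameters(), encoder.parameters()):
        p_t.mul_(0.996).add_(p, alpha=0.004)
```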
  • 4
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    ...We aim to make this repo a centralized and accessible place to gather techniques for training large-scale autoregressive language models, and accelerate research into large-scale training. For those looking for a TPU-centric codebase, we recommend Mesh Transformer JAX. If you are not looking to train models with billions of parameters from scratch, this is likely the wrong library to use. For generic inference needs, we recommend you use the Hugging Face transformers library instead which supports GPT-NeoX models.
    Downloads: 0 This Week
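    Following the description's pointer, here is a minimal sketch of the inference path it recommends: loading a GPT-NeoX-format checkpoint through the Hugging Face transformers library rather than this training codebase. The checkpoint name and generation settings are illustrative, and device_map="auto" assumes the accelerate package is installed.

```python
# Minimal sketch: GPT-NeoX inference via Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neox-20b"  # any GPT-NeoX-format checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

inputs = tokenizer("Large-scale training is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```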
  • 5
    deep-q-learning

    Minimal Deep Q Learning (DQN & DDQN) implementations in Keras

    The deep-q-learning repository, authored by keon, provides a Python implementation of the Deep Q-Learning algorithm, a cornerstone method in reinforcement learning. It implements the core logic needed to train an agent with Q-learning over neural networks (approximating Q-values with deep nets): environment interaction loops, experience replay, network updates, and policy behavior. For learners and researchers interested in reinforcement learning, this repo offers a concrete, runnable example bridging theory and practice: you can execute the code, experiment with hyperparameters, observe convergence behavior, and see how deep Q-learning learns policies over time in standard environments. ...
    Downloads: 0 This Week
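    The ingredients listed above (interaction loop, experience replay, epsilon-greedy policy, Q-target updates) fit in a short Keras sketch. This is written in the spirit of the repo, not copied from it; the CartPole environment, network shape, and hyperparameters are illustrative assumptions.

```python
# Condensed DQN sketch: replay buffer, epsilon-greedy actions, Q-target fit.
# Environment, architecture, and hyperparameters are illustrative.
import random
from collections import deque
import numpy as np
import gymnasium as gym  # assumption: the classic CartPole benchmark
import tensorflow as tf

env = gym.make("CartPole-v1")
n_actions = env.action_space.n
model = tf.keras.Sequential([
    tf.keras.layers.Dense(24, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(24, activation="relu"),
    tf.keras.layers.Dense(n_actions, activation="linear"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")

memory, gamma, epsilon = deque(maxlen=2000), 0.95, 0.1
state, _ = env.reset()
for _ in range(200):  # interaction loop: act, store, learn
    # Epsilon-greedy: explore with probability epsilon, else exploit Q-values.
    if random.random() < epsilon:
        action = env.action_space.sample()
    else:
        action = int(np.argmax(model.predict(state[None], verbose=0)[0]))
    next_state, reward, terminated, truncated, _ = env.step(action)
    memory.append((state, action, reward, next_state, terminated))
    state = next_state if not (terminated or truncated) else env.reset()[0]

    if len(memory) >= 32:  # experience replay: learn from a random minibatch
        for s, a, r, s2, done in random.sample(memory, 32):
            # Q-target: reward, plus discounted max future Q if non-terminal.
            target = r if done else r + gamma * np.max(model.predict(s2[None], verbose=0)[0])
            q = model.predict(s[None], verbose=0)
            q[0][a] = target
            model.fit(s[None], q, epochs=1, verbose=0)
```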
  • 6
    Learn_Deep_Learning_in_6_Weeks

    This is the Curriculum for "Learn Deep Learning in 6 Weeks"

    Learn_Deep_Learning_in_6_Weeks compresses an introductory deep learning curriculum into six weeks of structured learning and practice. It begins with neural network fundamentals and moves through convolutional and recurrent architectures, optimization strategies, regularization, and transfer learning. The materials emphasize code-first understanding: building small models, training them on accessible datasets, and analyzing their behavior. Each week culminates in a tangible outcome—such as a...
    Downloads: 0 This Week
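    In the code-first spirit the curriculum describes, a "small model on an accessible dataset" exercise might look like the following Keras sketch; MNIST and the architecture are assumptions for illustration, not taken from the course materials.

```python
# Illustrative beginner exercise: a small dense network trained on MNIST.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```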