19 projects for "python-dpkt" with 2 filters applied:

  • 1
    Open Model Zoo

    Pre-trained Deep Learning models and demos

    ...In addition to model files, Open Model Zoo provides demo applications that show realistic usage patterns and help developers quickly prototype and understand inference pipelines in C++, Python, or via the OpenCV Graph API. Tools in the repository also help automate model downloads and other tasks, making it easier to incorporate these models into production systems or custom solutions.
    Downloads: 7 This Week
  • 2
    AudioCraft

    Audiocraft is a library for audio processing and generation

    ...Both models operate over discrete audio tokens produced by a neural codec (EnCodec), which acts like a tokenizer for waveforms and enables efficient sequence modeling. The repo provides inference scripts, checkpoints, and simple Python APIs so you can generate clips from prompts or incorporate the models into applications. It also contains training code and recipes, so researchers can fine-tune on custom data or explore new objectives without building infrastructure from scratch. Example notebooks, CLI tools, and audio utilities help with prompt design, conditioning on reference audio, and post-processing to produce ready-to-share outputs.
    Downloads: 4 This Week
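    A minimal generation sketch following AudioCraft's documented MusicGen interface (the model name, prompt, and duration below are illustrative, and exact signatures may differ across releases):

```python
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained text-to-music model (weights are downloaded on first use).
model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=8)          # seconds of audio per clip

# One clip is generated per text prompt; the result is a batch of waveforms.
wav = model.generate(['lo-fi hip hop beat with warm piano'])
for idx, one_wav in enumerate(wav):
    # Writes clip_0.wav at the model's native sample rate with loudness normalization.
    audio_write(f'clip_{idx}', one_wav.cpu(), model.sample_rate, strategy='loudness')
```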
  • 3
    The Hypersim Dataset

    Photorealistic Synthetic Dataset for Holistic Indoor Scene Understanding

    Hypersim is a large-scale, photorealistic synthetic dataset and tooling suite for indoor scene understanding research. It provides richly annotated renderings—RGB, depth, surface normals, instance and semantic segmentations, and material/lighting metadata—produced from high-fidelity virtual environments. The dataset spans diverse furniture layouts, room types, and camera trajectories, enabling robust training for geometry, segmentation, and SLAM-adjacent tasks. Rendering pipelines and...
    Downloads: 3 This Week
  • 4
    VibeTensor

    Our first fully AI-generated deep learning system

    ...What makes VibeTensor remarkable is that every major component, from core libraries and dispatch systems to CUDA runtime support, caching allocators, and language bindings, was created and validated by coding agents using automated builds and tests rather than manual line-by-line human coding. The system includes both a Python frontend via a torch-like API and an experimental Node.js/TypeScript interface.
    Downloads: 0 This Week
  • 5
    Megatron

    Ongoing research training transformer models at scale

    Megatron is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This repository is for ongoing research on training large transformer language models at scale. We developed efficient model-parallel (tensor, sequence, and pipeline) and multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision. Megatron is also used in NeMo Megatron, a framework to help enterprises overcome the challenges of building and...
    Downloads: 0 This Week
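    The tensor model parallelism mentioned above can be illustrated with a small conceptual sketch (this is not Megatron's actual API): each GPU owns a slice of a layer's weight matrix, and the full activation is reassembled with a collective.

```python
import torch
import torch.distributed as dist

class ColumnParallelLinear(torch.nn.Module):
    """Conceptual column-parallel linear layer: each rank holds a slice of the
    output dimension. Real implementations use autograd-aware collectives so
    gradients flow back through the all-gather."""
    def __init__(self, in_features, out_features, world_size):
        super().__init__()
        assert out_features % world_size == 0
        self.local = torch.nn.Linear(in_features, out_features // world_size)

    def forward(self, x):
        y_local = self.local(x)                                  # this rank's slice of the output
        parts = [torch.empty_like(y_local) for _ in range(dist.get_world_size())]
        dist.all_gather(parts, y_local)                          # collect every rank's slice
        return torch.cat(parts, dim=-1)                          # full activation on every rank
```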
  • 6
    PyTorch3D

    PyTorch3D is FAIR's library of reusable components for deep learning

    PyTorch3D is a comprehensive library for 3D deep learning that brings differentiable rendering, geometric operations, and 3D data structures into the PyTorch ecosystem. It’s designed to make it easy to build and train neural networks that work directly with 3D data such as meshes, point clouds, and implicit surfaces. The library provides fast GPU-accelerated implementations of rendering pipelines, transformations, rasterization, and lighting—making it possible to compute gradients through...
    Downloads: 1 This Week
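    A short example of the differentiable operators described above, using names from PyTorch3D's public API (details may vary by version):

```python
import torch
from pytorch3d.utils import ico_sphere
from pytorch3d.ops import sample_points_from_meshes
from pytorch3d.loss import chamfer_distance

mesh = ico_sphere(level=2)                                   # a unit icosphere as a Meshes object
points = sample_points_from_meshes(mesh, num_samples=1000)   # (1, 1000, 3) points on the surface
target = torch.rand(1, 1000, 3, requires_grad=True)          # stand-in for a predicted point cloud
loss, _ = chamfer_distance(points, target)                   # differentiable set-to-set distance
loss.backward()                                              # gradients reach `target`
```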
  • 7
    DeepSpeed

    Deep learning optimization library: makes distributed training easy

    DeepSpeed is an easy-to-use deep learning optimization software suite that enables unprecedented scale and speed for deep learning training and inference. With DeepSpeed you can: (1) train or run inference on dense or sparse models with billions or trillions of parameters; (2) achieve excellent system throughput and efficiently scale to thousands of GPUs; (3) train or run inference on resource-constrained GPU systems; (4) achieve unprecedented low latency and high throughput for inference; (5) achieve extreme...
    Downloads: 3 This Week
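    A minimal sketch of wrapping a PyTorch model with DeepSpeed (the config keys and values below are illustrative, and parameter names can differ between releases):

```python
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)
ds_config = {
    "train_batch_size": 8,
    "zero_optimization": {"stage": 2},                       # partition optimizer state across GPUs
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize returns an engine that owns the optimizer and parallelism details.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config)

x = torch.randn(8, 1024).to(engine.device)
loss = engine(x).mean()
engine.backward(loss)    # gradient handling (scaling, partitioning) happens inside the engine
engine.step()            # optimizer step plus ZeRO bookkeeping
```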
  • 8
    Make-A-Video - Pytorch (wip)

    Implementation of Make-A-Video, new SOTA text to video generator

    Implementation of Make-A-Video, a new SOTA text-to-video generator from Meta AI, in PyTorch. They combine pseudo-3D convolutions (axial convolutions) and temporal attention and show much better temporal fusion. Pseudo-3D convolutions aren't a new concept; they have been explored before in other contexts, for example for protein contact prediction as "dimensional hybrid residual networks". The gist of the paper comes down to this: take a SOTA text-to-image model (here they use DALL-E2, but the same learning...
    Downloads: 2 This Week
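    The pseudo-3D idea factors a full 3D convolution into a 2D spatial convolution applied per frame followed by a 1D temporal convolution across frames. An illustrative module (not the repository's exact code):

```python
import torch
import torch.nn as nn

class Pseudo3DConv(nn.Module):
    """Pseudo-3D (axial) convolution sketch: spatial 2D conv per frame, then a 1D conv over time."""
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.spatial = nn.Conv2d(dim, dim, kernel_size, padding=pad)
        self.temporal = nn.Conv1d(dim, dim, kernel_size, padding=pad)

    def forward(self, x):
        # x: (batch, channels, frames, height, width)
        b, c, f, h, w = x.shape
        x = x.permute(0, 2, 1, 3, 4).reshape(b * f, c, h, w)    # fold frames into the batch
        x = self.spatial(x)                                      # per-frame spatial mixing
        x = x.reshape(b, f, c, h, w).permute(0, 3, 4, 2, 1)      # (b, h, w, c, f)
        x = x.reshape(b * h * w, c, f)
        x = self.temporal(x)                                     # mix information across time
        return x.reshape(b, h, w, c, f).permute(0, 3, 4, 1, 2)   # back to (b, c, f, h, w)

video = torch.randn(2, 64, 8, 32, 32)
assert Pseudo3DConv(64)(video).shape == video.shape
```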
  • 9
    JEPA

    PyTorch code and models for V-JEPA self-supervised learning from video

    JEPA (Joint-Embedding Predictive Architecture) captures the idea of predicting missing high-level representations rather than reconstructing pixels, aiming for robust, scalable self-supervised learning. A context encoder ingests visible regions and predicts target embeddings for masked regions produced by a separate target encoder, avoiding low-level reconstruction losses that can overfit to texture. This makes learning focus on semantics and structure, yielding features that transfer well...
    Downloads: 0 This Week
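    A rough sketch of the objective described above, with hypothetical encoder and predictor signatures (the actual V-JEPA code uses Vision Transformers and a more elaborate masking scheme):

```python
import torch
import torch.nn.functional as F

def jepa_step(context_encoder, target_encoder, predictor, video, mask, momentum=0.998):
    """One JEPA-style update: predict target-encoder embeddings of masked regions
    instead of reconstructing pixels. All module signatures here are hypothetical."""
    with torch.no_grad():
        targets = target_encoder(video)              # (B, N, D) embeddings for all patches
    context = context_encoder(video, mask)           # the context encoder sees only visible patches
    preds = predictor(context, mask)                 # predicted embeddings at the masked positions
    loss = F.smooth_l1_loss(preds, targets[mask])    # loss lives in representation space

    # The target encoder is an exponential moving average of the context encoder.
    with torch.no_grad():
        for p_t, p_c in zip(target_encoder.parameters(), context_encoder.parameters()):
            p_t.mul_(momentum).add_(p_c, alpha=1 - momentum)
    return loss
```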
  • 10
    DLRM

    An implementation of a deep learning recommendation model (DLRM)

    DLRM (Deep Learning Recommendation Model) is Meta’s open-source reference implementation for large-scale recommendation systems built to handle extremely high-dimensional sparse features and embedding tables. The architecture combines dense (MLP) and sparse (embedding) branches, then interacts features via dot product or feature interactions before passing through further dense layers to predict click-through, ranking scores, or conversion probabilities. The implementation is optimized for...
    Downloads: 0 This Week
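    The dense-plus-sparse structure described above can be sketched in a few lines of PyTorch (a toy illustration, not Meta's reference implementation):

```python
import torch
import torch.nn as nn

class TinyDLRM(nn.Module):
    """Toy DLRM-style model: a dense MLP, embedding tables for sparse features,
    and pairwise dot-product feature interactions feeding a top layer."""
    def __init__(self, num_dense, cardinalities, dim=16):
        super().__init__()
        self.bottom = nn.Sequential(nn.Linear(num_dense, dim), nn.ReLU())
        self.embs = nn.ModuleList(nn.Embedding(c, dim) for c in cardinalities)
        n = 1 + len(cardinalities)                        # dense vector plus one per embedding table
        self.top = nn.Linear(n * (n - 1) // 2 + dim, 1)

    def forward(self, dense, sparse):
        d = self.bottom(dense)                            # (B, dim)
        feats = [d] + [emb(sparse[:, i]) for i, emb in enumerate(self.embs)]
        x = torch.stack(feats, dim=1)                     # (B, n, dim)
        inter = x @ x.transpose(1, 2)                     # pairwise dot products between features
        i, j = torch.triu_indices(x.size(1), x.size(1), offset=1)
        z = torch.cat([d, inter[:, i, j]], dim=1)         # dense output + unique interactions
        return torch.sigmoid(self.top(z))                 # e.g. click-through probability

model = TinyDLRM(num_dense=4, cardinalities=[100, 50])
prob = model(torch.randn(8, 4), torch.randint(0, 50, (8, 2)))
```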
  • 11
    vJEPA-2

    PyTorch code and models for VJEPA2 self-supervised learning from video

    VJEPA2 is a next-generation self-supervised learning framework for video that extends the “predict in representation space” idea from i-JEPA to the temporal domain. Instead of reconstructing pixels, it predicts the missing high-level embeddings of masked space-time regions using a context encoder and a slowly updated target encoder. This objective encourages the model to learn semantics, motion, and long-range structure without the shortcuts that pixel-level losses can invite. The...
    Downloads: 0 This Week
  • 12
    OpenCV

    Open Source Computer Vision Library

    The Open Source Computer Vision Library has >2500 algorithms, extensive documentation, and sample code for real-time computer vision. It works on Windows, Linux, Mac OS X, Android, and iOS, and in the browser through JavaScript. Languages: C++, Python, Julia, JavaScript. Homepage: https://opencv.org Q&A forum: https://forum.opencv.org/ Documentation: https://docs.opencv.org Source code: https://github.com/opencv Please pay special attention to our tutorials: https://docs.opencv.org/master Books about OpenCV are described here: https://opencv.org/books.html
    Downloads: 1,413 This Week
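    A few lines of the Python bindings as an example (file names are placeholders):

```python
import cv2

img = cv2.imread("input.jpg")                        # BGR image as a NumPy array
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)         # convert to single-channel grayscale
edges = cv2.Canny(gray, threshold1=100, threshold2=200)
cv2.imwrite("edges.png", edges)                      # write the edge map to disk
```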
  • 13
    MLPACK

    MLPACK is a C++ machine learning library with emphasis on scalability, speed, and ease of use. Its aim is to make machine learning possible for novice users by means of a simple, consistent API, while simultaneously exploiting C++ language features to provide maximum performance and flexibility for expert users.
    More info + downloads: https://mlpack.org
    Git repo: https://github.com/mlpack/mlpack
    Downloads: 0 This Week
  • 14
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    This repository records EleutherAI's library for training large-scale language models on GPUs. Our current framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. We aim to make this repo a centralized and accessible place to gather techniques for training large-scale autoregressive language models, and accelerate research into large-scale training. For those looking for a TPU-centric codebase, we...
    Downloads: 1 This Week
  • 15
    Deep Learning Papers Reading Roadmap

    Deep Learning papers reading roadmap for anyone who is eager to learn

    Deep Learning Papers Reading Roadmap is a widely known curated reading plan for deep learning that helps newcomers and practitioners navigate the vast literature in a structured and intentional way. It is built around several guiding principles: moving from outline to detail, from older foundational papers to state-of-the-art work, and from generic to more specialized areas while keeping a focus on impactful contributions. The roadmap organizes papers into categories such as fundamentals,...
    Downloads: 0 This Week
  • 16
    deep-q-learning

    Minimal Deep Q Learning (DQN & DDQN) implementations in Keras

    The deep-q-learning repository, authored by keon, provides a Python implementation of the Deep Q-Learning algorithm, a cornerstone method in reinforcement learning. It implements the core logic needed to train an agent with Q-learning and neural networks (approximating Q-values with deep nets): environment interaction loops, experience replay, network updates, and policy behavior.
    Downloads: 0 This Week
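    The pieces listed above (Q-network, replay buffer, Bellman-target update) look roughly like this in Keras; an illustrative sketch in the spirit of the repository, not its exact code:

```python
import random
import numpy as np
from collections import deque
from tensorflow import keras

def build_q_net(state_size, action_size):
    """Small fully connected network that maps a state to one Q-value per action."""
    model = keras.Sequential([
        keras.layers.Dense(24, activation="relu", input_shape=(state_size,)),
        keras.layers.Dense(24, activation="relu"),
        keras.layers.Dense(action_size, activation="linear"),
    ])
    model.compile(loss="mse", optimizer=keras.optimizers.Adam(learning_rate=1e-3))
    return model

memory = deque(maxlen=2000)                  # experience replay buffer of (s, a, r, s', done)

def replay(model, batch_size=32, gamma=0.95):
    """Sample past transitions and move Q(s, a) toward the Bellman target."""
    for state, action, reward, next_state, done in random.sample(memory, batch_size):
        target = reward
        if not done:
            # Bellman target: r + gamma * max_a' Q(s', a')
            target = reward + gamma * np.amax(model.predict(next_state, verbose=0)[0])
        q_values = model.predict(state, verbose=0)
        q_values[0][action] = target
        model.fit(state, q_values, epochs=1, verbose=0)
```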
  • 17
    TensorFlow Course

    Simple and ready-to-use tutorials for TensorFlow

    This repository houses a highly popular (~16k stars) set of TensorFlow tutorials and example code aimed at beginners and intermediate users. It includes Jupyter notebooks and scripts that cover neural network fundamentals, model training, deployment, and more, with support for Google Colab.
    Downloads: 0 This Week
  • 18
    LearningToCompare_FSL

    Learning to Compare: Relation Network for Few-Shot Learning

    LearningToCompare_FSL is a PyTorch implementation of the “Learning to Compare: Relation Network for Few-Shot Learning” paper, focusing on the few-shot learning experiments described in that work. The core idea implemented here is the relation network, which learns to compare pairs of feature embeddings and output relation scores that indicate whether two images belong to the same class, enabling classification from only a handful of labeled examples. The repository provides training and...
    Downloads: 0 This Week
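    The relation-scoring idea can be sketched with a toy MLP relation module (the paper's relation network is convolutional and operates on feature maps):

```python
import torch
import torch.nn as nn

class RelationModule(nn.Module):
    """Scores how related a query embedding is to each class embedding."""
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),            # relation score in [0, 1]
        )

    def forward(self, query, support):
        # query: (Q, D) query embeddings; support: (N_way, D) one embedding per class
        q = query.unsqueeze(1).expand(-1, support.size(0), -1)
        s = support.unsqueeze(0).expand(query.size(0), -1, -1)
        pairs = torch.cat([q, s], dim=-1)                  # every query paired with every class
        return self.net(pairs).squeeze(-1)                 # (Q, N_way) relation scores

scores = RelationModule(feat_dim=128)(torch.randn(5, 128), torch.randn(3, 128))
```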
  • 19
    Deep Reinforcement Learning TensorFlow

    TensorFlow implementation of Deep Reinforcement Learning papers

    Deep Reinforcement Learning TensorFlow is a comprehensive TensorFlow codebase that implements several foundational deep reinforcement learning algorithms for educational and experimental use. The repository focuses on clarity and modularity so users can study how different RL approaches are built and compare their behavior across environments. It includes implementations of well-known algorithms such as Deep Q-Networks (DQN), policy gradients, and related variants, demonstrating how neural...
    Downloads: 5 This Week