Deep Learning Frameworks for BSD

Browse free open source Deep Learning Frameworks and projects for BSD below. Use the toggles on the left to filter open source Deep Learning Frameworks by OS, license, programming language, and project status.

  • 1
    OpenCV

    Open Source Computer Vision Library

    The Open Source Computer Vision Library has more than 2,500 algorithms, extensive documentation, and sample code for real-time computer vision. It works on Windows, Linux, Mac OS X, Android, and iOS, and in your browser through JavaScript. Languages: C++, Python, Julia, JavaScript. Homepage: https://opencv.org Q&A forum: https://forum.opencv.org/ Documentation: https://docs.opencv.org Source code: https://github.com/opencv Please pay special attention to our tutorials: https://docs.opencv.org/master Books about OpenCV are described at https://opencv.org/books.html
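
    A minimal sketch of the kind of real-time-vision building blocks the library exposes; the image path is a placeholder.

        import cv2

        # Load an image (placeholder path; cv2.imread returns None if the
        # file is missing), convert to grayscale, and run Canny edge
        # detection -- two basic primitives among OpenCV's >2500 algorithms.
        img = cv2.imread("example.jpg")
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, threshold1=100, threshold2=200)
        cv2.imwrite("edges.jpg", edges)
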
    Downloads: 3,541 This Week
    Last Update:
    See Project
  • 2
    MATLAB Deep Learning Model Hub

    Discover pretrained models for deep learning in MATLAB

    Discover pretrained models for deep learning in MATLAB. Pretrained image classification networks have already learned to extract powerful and informative features from natural images; use them as a starting point to learn a new task via transfer learning. Inputs are RGB images; outputs are the predicted label and score.
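
    The hub itself is MATLAB-based; as an analogy in Python, the same transfer-learning pattern (freeze pretrained features, retrain a small head) looks roughly like this with torchvision. The class count is illustrative.

        import torch
        import torch.nn as nn
        from torchvision import models

        # Start from a network pretrained on natural images and swap the
        # classifier head for a new 10-class task (hypothetical task size).
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for p in model.parameters():
            p.requires_grad = False                      # freeze pretrained features
        model.fc = nn.Linear(model.fc.in_features, 10)   # new trainable head

        x = torch.randn(1, 3, 224, 224)                  # dummy RGB image batch
        scores = model(x).softmax(dim=1)                 # predicted scores
        label = scores.argmax(dim=1)                     # predicted label
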
    Downloads: 5 This Week
    Last Update:
    See Project
  • 3
    Requests for Research

    A living collection of deep learning problems

    Requests for Research is an OpenAI repository that collects and organizes open research ideas in artificial intelligence. It is structured as a curated list of project proposals, challenges, and exploratory directions suggested by OpenAI researchers for the broader community. Each request highlights a specific problem area, often with context, motivation, and possible approaches, serving as inspiration for independent researchers, students, and practitioners. The repository is intended to foster collaboration and accelerate progress by sharing promising questions that OpenAI itself may not have the resources to fully pursue. While the ideas are not guaranteed to be unique or unexplored, they reflect areas OpenAI believes would benefit from more investigation. The project functions as a living document of research directions rather than a codebase, making it an accessible entry point for those looking to contribute to the advancement of AI.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 4
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    This repository contains EleutherAI's library for training large-scale language models on GPUs. Our current framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. We aim to make this repo a centralized and accessible place to gather techniques for training large-scale autoregressive language models, and to accelerate research into large-scale training. For those looking for a TPU-centric codebase, we recommend Mesh Transformer JAX. If you are not looking to train models with billions of parameters from scratch, this is likely the wrong library to use. For generic inference needs, we recommend you use the Hugging Face transformers library instead, which supports GPT-NeoX models.
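
    For the plain-inference path the description recommends, a minimal sketch with Hugging Face transformers and EleutherAI's public checkpoint (a ~20B-parameter model, so substantial GPU memory is assumed):

        from transformers import AutoModelForCausalLM, AutoTokenizer

        # Load the public GPT-NeoX checkpoint and generate a continuation.
        tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
        model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")

        inputs = tok("Deep learning frameworks for BSD", return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=32)
        print(tok.decode(out[0], skip_special_tokens=True))
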
    Downloads: 3 This Week
    Last Update:
    See Project
  • 5
    MatlabFunc

    Matlab codes for feature learning

    MatlabFunc is a collection of MATLAB functions developed by the ZJULearning group to support various tasks in computer vision, machine learning, and numerical computation. The repository brings together a wide range of utility scripts, algorithms, and implementations that serve as building blocks for research and development. These functions cover areas such as matrix operations, optimization, data processing, and visualization, making them broadly applicable across different research domains. The project is intended to provide reusable and adaptable MATLAB code that can save time for researchers and students working on experimental or applied projects. By consolidating these tools in one place, MatlabFunc serves as a practical reference and toolkit for both academic and engineering purposes. Contributions and improvements from the community are encouraged, allowing the repository to grow into a richer resource over time.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 6
    Resume-Matcher

    Improve your resumes with Resume Matcher

    Resume-Matcher is a command-line application that compares resumes against job descriptions using natural language processing. It provides a compatibility score based on keyword relevance and highlights areas where the resume aligns—or doesn't—with the target role. Designed for job seekers and HR professionals, it helps improve resume tailoring and streamlines candidate screening.
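
    Not Resume-Matcher's actual implementation, just a sketch of the underlying idea it describes: scoring a resume against a job description by keyword relevance, here via TF-IDF cosine similarity.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        resume = "Python developer with NLP and machine learning experience"
        job_ad = "Seeking a machine learning engineer with Python and NLP skills"

        # Vectorize both texts and compare them; a higher cosine similarity
        # means more shared relevant terms.
        vec = TfidfVectorizer(stop_words="english")
        tfidf = vec.fit_transform([resume, job_ad])
        score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
        print(f"compatibility score: {score:.2f}")
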
    Downloads: 3 This Week
    Last Update:
    See Project
  • 7
    Deep-Learning-Interview-Book

    Interview guide for machine learning, mathematics, and deep learning

    Deep-Learning-Interview-Book collects structured notes, Q&A, and concept summaries tailored to deep-learning interviews, turning scattered study into a coherent playbook. It spans the core math (linear algebra, probability, optimization) and the practitioner topics candidates actually face, like CNNs, RNNs/Transformers, attention, regularization, and training tricks. Explanations emphasize intuition first, then key formulas and common pitfalls, so you can reason through unseen questions rather than memorize trivia. Many entries connect theory to implementation details, including how choices in activation, initialization, or normalization affect convergence and stability. The content is organized for fast review before an interview loop but is also deep enough for systematic study over weeks. Because it’s text-first and modular, it works equally well as a quick refresher or a backbone for a full study plan.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 8
    DeepSpeed

    Deep learning optimization library: makes distributed training easy

    DeepSpeed is an easy-to-use deep learning optimization software suite that enables unprecedented scale and speed for deep learning training and inference. With DeepSpeed you can: (1) train or run inference on dense or sparse models with billions or trillions of parameters; (2) achieve excellent system throughput and efficiently scale to thousands of GPUs; (3) train or run inference on resource-constrained GPU systems; (4) achieve unprecedented low latency and high throughput for inference; and (5) achieve extreme compression for unparalleled inference latency and model-size reduction at low cost. DeepSpeed offers a confluence of system innovations that have made large-scale DL training effective and efficient, greatly improved ease of use, and redefined the DL training landscape in terms of what scale is possible. Innovations such as ZeRO, 3D parallelism, DeepSpeed-MoE, and ZeRO-Infinity fall under the training pillar.
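
    An assumed-typical setup sketch: wrap a PyTorch model with a small JSON-style config enabling ZeRO stage 2 and fp16; the returned engine's backward()/step() replace the usual PyTorch calls. Config values here are illustrative, not recommendations.

        import deepspeed
        import torch.nn as nn

        ds_config = {
            "train_batch_size": 32,
            "fp16": {"enabled": True},                  # assumes a GPU is available
            "zero_optimization": {"stage": 2},          # ZeRO stage-2 partitioning
            "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
        }

        model = nn.Linear(512, 512)                     # stand-in model
        engine, optimizer, _, _ = deepspeed.initialize(
            model=model, model_parameters=model.parameters(), config=ds_config
        )
        # training step: loss = ...; engine.backward(loss); engine.step()
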
    Downloads: 1 This Week
    Last Update:
    See Project
  • 9
    Exposure Correction

    A multi-scale deep model for correcting over- and under-exposed photos

    Exposure_Correction is a research project that provides the implementation for the paper Learning Multi-Scale Photo Exposure Correction (CVPR 2021). The repository focuses on correcting poorly exposed photographs, handling both underexposure and overexposure using a deep learning approach. The method employs a multi-scale framework that learns to enhance images by adjusting exposure levels across different spatial resolutions. This allows the model to preserve fine details while correcting global lighting inconsistencies. The repository includes pre-trained models, datasets, and training/testing code to enable reproducibility and experimentation. By leveraging this framework, researchers and developers can apply exposure correction to a wide range of natural images, improving visual quality without manual editing. The project serves both as a research reference and a practical tool for computational photography and image enhancement.
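
    Illustrative only, not the paper's model: the multi-scale idea it describes amounts to processing a Laplacian-pyramid decomposition like this one, correcting global exposure at coarse levels and fine detail at high resolutions.

        import cv2

        img = cv2.imread("photo.jpg").astype("float32")   # placeholder path
        levels = []
        cur = img
        for _ in range(3):
            down = cv2.pyrDown(cur)
            up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
            levels.append(cur - up)      # high-frequency residual at this scale
            cur = down
        levels.append(cur)               # coarsest level: global illumination
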
    Downloads: 1 This Week
    Last Update:
    See Project
  • 10
    Make-A-Video - Pytorch (wip)

    Implementation of Make-A-Video, new SOTA text to video generator

    Implementation of Make-A-Video, the new SOTA text-to-video generator from Meta AI, in Pytorch. They combine pseudo-3D convolutions (axial convolutions) and temporal attention and show much better temporal fusion. Pseudo-3D convolutions aren't a new concept; they have been explored before in other contexts, say for protein contact prediction as "dimensional hybrid residual networks". The gist of the paper comes down to: take a SOTA text-to-image model (here they use DALL-E 2, but the same learning points would easily apply to Imagen), make a few minor modifications for attention across time and other ways to skimp on the compute cost, do frame interpolation correctly, and get a great video model out. When passing in images (if one were to pretrain on images first), both temporal convolution and temporal attention will be automatically skipped. In other words, you can use this straightforwardly in your 2D Unet and then port it over to a 3D Unet once that phase of the training is done.
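
    A sketch of the pseudo-3D convolution idea described above, in plain PyTorch (not the repo's exact module): a 2D conv over space followed by a 1D conv over time, instead of a full 3D kernel.

        import torch
        import torch.nn as nn

        class PseudoConv3d(nn.Module):
            def __init__(self, dim, kernel=3):
                super().__init__()
                self.spatial = nn.Conv2d(dim, dim, kernel, padding=kernel // 2)
                self.temporal = nn.Conv1d(dim, dim, kernel, padding=kernel // 2)

            def forward(self, x):                      # x: (batch, dim, time, h, w)
                b, c, t, h, w = x.shape
                x = x.transpose(1, 2).reshape(b * t, c, h, w)
                x = self.spatial(x)                    # per-frame spatial conv
                x = x.reshape(b, t, c, h, w).permute(0, 3, 4, 2, 1).reshape(b * h * w, c, t)
                x = self.temporal(x)                   # per-pixel temporal conv
                return x.reshape(b, h, w, c, t).permute(0, 3, 4, 1, 2)

        video = torch.randn(1, 8, 4, 16, 16)           # tiny toy clip
        out = PseudoConv3d(8)(video)                   # same shape back out
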
    Downloads: 1 This Week
    Last Update:
    See Project
  • 11
    Megatron

    Ongoing research training transformer models at scale

    Megatron is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This repository is for ongoing research on training large transformer language models at scale. We developed efficient, model-parallel (tensor, sequence, and pipeline) and multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision. Megatron is also used in NeMo Megatron, a framework that helps enterprises overcome the challenges of building and training sophisticated natural language processing models with billions or trillions of parameters.
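
    Tensor model parallelism in miniature, as a single-device sketch (not Megatron's actual code): split a linear layer's weight into column shards, apply each shard separately (in Megatron these live on different GPUs), and concatenate; the result matches the unsharded layer.

        import torch

        torch.manual_seed(0)
        x = torch.randn(4, 16)                 # batch of activations
        w = torch.randn(16, 32)                # full weight matrix

        w0, w1 = w.chunk(2, dim=1)             # two column shards
        y_parallel = torch.cat([x @ w0, x @ w1], dim=1)
        y_full = x @ w
        assert torch.allclose(y_parallel, y_full, atol=1e-6)
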
    Downloads: 1 This Week
    Last Update:
    See Project
  • 12
    AudioCraft

    Audiocraft is a library for audio processing and generation

    AudioCraft is a PyTorch library for text-to-audio and text-to-music generation, packaging research models and tooling for training and inference. It includes MusicGen for music generation conditioned on text (and optionally melody) and AudioGen for text-conditioned sound effects and environmental audio. Both models operate over discrete audio tokens produced by a neural codec (EnCodec), which acts like a tokenizer for waveforms and enables efficient sequence modeling. The repo provides inference scripts, checkpoints, and simple Python APIs so you can generate clips from prompts or incorporate the models into applications. It also contains training code and recipes, so researchers can fine-tune on custom data or explore new objectives without building infrastructure from scratch. Example notebooks, CLI tools, and audio utilities help with prompt design, conditioning on reference audio, and post-processing to produce ready-to-share outputs.
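
    An assumed-typical usage sketch based on the project's documented Python API: load a pretrained MusicGen checkpoint and generate a short clip from a text prompt.

        from audiocraft.models import MusicGen
        from audiocraft.data.audio import audio_write

        # Small pretrained checkpoint; prompt and duration are illustrative.
        model = MusicGen.get_pretrained("facebook/musicgen-small")
        model.set_generation_params(duration=8)            # seconds
        wav = model.generate(["lo-fi beat with warm piano"])
        audio_write("clip", wav[0].cpu(), model.sample_rate, strategy="loudness")
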
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
    DLRM

    An implementation of a deep learning recommendation model (DLRM)

    DLRM (Deep Learning Recommendation Model) is Meta's open-source reference implementation for large-scale recommendation systems, built to handle extremely high-dimensional sparse features and embedding tables. The architecture combines dense (MLP) and sparse (embedding) branches, then interacts the resulting features via pairwise dot products before passing them through further dense layers to predict click-through rates, ranking scores, or conversion probabilities. The implementation is optimized for performance at scale, supporting multi-GPU and multi-node execution, quantization, embedding partitioning, and pipelined I/O to feed huge embedding tables efficiently. It includes data loaders for standard benchmarks (like Criteo), training scripts, evaluation tools, and capabilities like mixed precision, gradient compression, and memory fusion to maximize throughput.
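
    A toy version of the layout described above (sizes are illustrative, not Meta's code): dense features go through an MLP, sparse features through embeddings, and the branches interact via pairwise dot products before a final MLP.

        import torch
        import torch.nn as nn

        class TinyDLRM(nn.Module):
            def __init__(self, n_dense=4, n_sparse=3, vocab=100, dim=8):
                super().__init__()
                self.bottom = nn.Sequential(nn.Linear(n_dense, dim), nn.ReLU())
                self.embs = nn.ModuleList(nn.Embedding(vocab, dim) for _ in range(n_sparse))
                n_feat = 1 + n_sparse                     # dense vector + embeddings
                n_pairs = n_feat * (n_feat - 1) // 2      # pairwise dot products
                self.top = nn.Sequential(nn.Linear(dim + n_pairs, 1), nn.Sigmoid())

            def forward(self, dense, sparse):             # sparse: (batch, n_sparse) ids
                feats = [self.bottom(dense)] + [e(sparse[:, i]) for i, e in enumerate(self.embs)]
                z = torch.stack(feats, dim=1)             # (batch, n_feat, dim)
                inter = z @ z.transpose(1, 2)             # all pairwise dots
                i, j = torch.triu_indices(z.size(1), z.size(1), offset=1)
                return self.top(torch.cat([feats[0], inter[:, i, j]], dim=1))

        model = TinyDLRM()
        ctr = model(torch.randn(2, 4), torch.randint(0, 100, (2, 3)))
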
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    Deep Learning Is Nothing

    Deep learning concepts in an approachable style

    Deep-Learning-Is-Nothing presents deep learning concepts in an approachable, from-scratch style that demystifies the stack behind modern models. It typically begins with linear algebra, calculus, and optimization refreshers before moving to perceptrons, multilayer networks, and gradient-based training. Implementations favor small, readable examples—often NumPy first—to show how forward and backward passes work without depending solely on high-level frameworks. Once the fundamentals are clear, the material extends to CNNs, RNNs, and attention mechanisms, explaining why each architecture suits particular tasks. Practical sections cover data pipelines, regularization, and evaluation, emphasizing reproducibility and debugging techniques. The goal is to replace buzzwords with intuition so learners can reason about architectures and training dynamics with confidence.
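
    The "NumPy first" style the material favors, in one sketch: a single hidden layer, one forward pass, and a hand-derived backward pass, with no framework involved.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.normal(size=(8, 2)); y = rng.normal(size=(8, 1))
        W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))

        for _ in range(100):
            h = np.maximum(0, x @ W1)               # forward: ReLU hidden layer
            pred = h @ W2
            grad_pred = 2 * (pred - y) / len(x)     # d(MSE)/d(pred)
            grad_h = grad_pred @ W2.T * (h > 0)     # chain rule through ReLU
            W2 -= 0.1 * h.T @ grad_pred             # gradient-descent updates
            W1 -= 0.1 * x.T @ grad_h
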
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    Deep Learning Models

    A collection of various deep learning architectures, models, and tips

    This repository collects clear, well-documented implementations of deep learning models and training utilities written by Sebastian Raschka. The code favors readability and pedagogy: components are organized so you can trace data flow through layers, losses, optimizers, and evaluation. Examples span fundamental architectures—MLPs, CNNs, RNN/Transformers—and practical tasks like image classification or text modeling. Reproducible training scripts and configuration files make it straightforward to rerun experiments or adapt them to your own datasets. The repo often pairs implementations with notes on design choices and trade-offs, turning it into both a toolbox and a learning resource. It’s suitable for students, researchers prototyping ideas, and practitioners who want clean baselines before adding complexity.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    Deeplearning.ai

    Study notes, summaries, and auxiliary materials for deep learning

    Deeplearning.ai collects study notes, summaries, and auxiliary materials aligned with the popular deep learning course series many learners take early in their AI journey. It distills core ideas such as optimization, regularization, convolutional networks, sequence models, and practical training tricks. The explanations aim to bridge theory and practice, often connecting mathematical intuition to code-level implications. By organizing the content as “books” or structured notes, it gives students a consistent reference to revisit as models and tooling evolve. Many learners use it to supplement course videos, reinforcing concepts before implementing assignments or projects. As a consolidated guide, it reduces context-switching and helps build a durable mental model of deep learning fundamentals.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    JEPA

    PyTorch code and models for V-JEPA self-supervised learning from video

    JEPA (Joint-Embedding Predictive Architecture) captures the idea of predicting missing high-level representations rather than reconstructing pixels, aiming for robust, scalable self-supervised learning. A context encoder ingests visible regions and predicts target embeddings for masked regions produced by a separate target encoder, avoiding low-level reconstruction losses that can overfit to texture. This makes learning focus on semantics and structure, yielding features that transfer well with simple linear probes and minimal fine-tuning. The repository provides training recipes, data pipelines, and evaluation utilities for image JEPA variants and often includes ablations that illuminate which masking and architectural choices matter. Because the objective is non-autoregressive and operates in embedding space, JEPA tends to be compute-efficient and stable at scale. The approach has become a strong alternative to contrastive or pixel-reconstruction methods for representation learning.
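
    The JEPA objective in miniature (illustrative, not the repo's code): a context encoder predicts the embeddings a slowly updated target encoder assigns to masked patches, so the loss lives in embedding space, never in pixels.

        import torch
        import torch.nn as nn

        dim = 32
        context_enc = nn.Linear(dim, dim)              # stand-in encoders
        target_enc = nn.Linear(dim, dim)
        target_enc.load_state_dict(context_enc.state_dict())
        predictor = nn.Linear(dim, dim)

        patches = torch.randn(16, dim)                 # toy patch embeddings
        mask = torch.rand(16) < 0.5                    # which patches are hidden

        with torch.no_grad():
            targets = target_enc(patches[mask])        # no gradient to target
        ctx = context_enc(patches[~mask]).mean(0)      # summary of visible context
        pred = predictor(ctx).expand_as(targets)
        loss = ((pred - targets) ** 2).mean()          # loss in embedding space
        loss.backward()

        # Target encoder tracks the context encoder by exponential moving average.
        with torch.no_grad():
            for p_t, p_c in zip(target_enc.parameters(), context_enc.parameters()):
                p_t.mul_(0.99).add_(0.01 * p_c)
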
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    Learn_Deep_Learning_in_6_Weeks

    This is the Curriculum for "Learn Deep Learning in 6 Weeks"

    Learn_Deep_Learning_in_6_Weeks compresses an introductory deep learning curriculum into six weeks of structured learning and practice. It begins with neural network fundamentals and moves through convolutional and recurrent architectures, optimization strategies, regularization, and transfer learning. The materials emphasize code-first understanding: building small models, training them on accessible datasets, and analyzing their behavior. Each week culminates in a tangible outcome—such as a working classifier or sequence model—so progress is visible and motivating. The plan also introduces practical considerations like GPU usage, checkpoints, and debugging training dynamics. It aims to give you enough breadth to recognize common patterns and enough depth to implement them on your own problems.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    MLPACK

    A C++ machine learning library emphasizing scalability, speed, and ease of use

    MLPACK is a C++ machine learning library with an emphasis on scalability, speed, and ease of use. Its aim is to make machine learning possible for novice users by means of a simple, consistent API, while simultaneously exploiting C++ language features to provide maximum performance and flexibility for expert users. More info and downloads: https://mlpack.org Git repo: https://github.com/mlpack/mlpack
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    PyTorch3D

    PyTorch3D is FAIR's library of reusable components for deep learning with 3D data

    PyTorch3D is a comprehensive library for 3D deep learning that brings differentiable rendering, geometric operations, and 3D data structures into the PyTorch ecosystem. It’s designed to make it easy to build and train neural networks that work directly with 3D data such as meshes, point clouds, and implicit surfaces. The library provides fast GPU-accelerated implementations of rendering pipelines, transformations, rasterization, and lighting—making it possible to compute gradients through full 3D rendering processes. Researchers use it for tasks like shape generation, reconstruction, view synthesis, and visual reasoning. PyTorch3D also includes utilities for loading, transforming, and sampling 3D assets, so models can be trained end-to-end from 2D supervision or partial data. Its modular design allows easy extension—components like differentiable rasterizers, mesh blending, or signed distance field (SDF) modules can be swapped or combined to test new architectures quickly.
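
    A small taste of the API (a hedged sketch; consult the docs for details): build a one-triangle mesh, sample surface points differentiably, and compare against a target point cloud with chamfer distance.

        import torch
        from pytorch3d.structures import Meshes
        from pytorch3d.ops import sample_points_from_meshes
        from pytorch3d.loss import chamfer_distance

        verts = torch.tensor([[0., 0, 0], [1, 0, 0], [0, 1, 0]], requires_grad=True)
        faces = torch.tensor([[0, 1, 2]])
        mesh = Meshes(verts=[verts], faces=[faces])

        points = sample_points_from_meshes(mesh, num_samples=100)
        target = torch.rand(1, 100, 3)                 # toy target point cloud
        loss, _ = chamfer_distance(points, target)
        loss.backward()                                # gradients flow to verts
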
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    Robust Tube MPC

    Example implementation of robust model predictive control using tubes

    robust-tube-mpc is a MATLAB implementation of robust tube-based Model Predictive Control (MPC). The framework provides tools to design and simulate controllers that maintain stability and constraint satisfaction in the presence of bounded disturbances. Tube-based MPC achieves robustness by combining a nominal trajectory planner with an error feedback controller that keeps the actual system state within a "tube" around the nominal trajectory. This repository includes example scripts and implementations demonstrating how to apply the method to control problems. It is particularly useful for researchers, students, and engineers exploring robust control strategies in uncertain environments. By offering a structured implementation, robust-tube-mpc makes it easier to study and extend advanced MPC techniques for real-world applications.
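
    The repository itself is MATLAB; this Python sketch only illustrates the tube control law it implements: apply the nominal input plus error feedback, u = u_nom + K(x - x_nom), so bounded disturbances keep the true state inside a tube around the nominal trajectory. The system matrices and gain K are assumed toy values.

        import numpy as np

        A, B = np.array([[1.0, 1.0], [0.0, 1.0]]), np.array([[0.5], [1.0]])
        K = np.array([[-0.6, -1.2]])           # stabilizing error-feedback gain (assumed)

        x = np.array([3.0, 1.0])               # true state
        x_nom = x.copy()                       # nominal (disturbance-free) state
        rng = np.random.default_rng(0)
        for _ in range(20):
            u_nom = (K @ x_nom).item()         # stand-in for the nominal MPC input
            u = u_nom + (K @ (x - x_nom)).item()   # tube feedback on the error
            w = rng.uniform(-0.1, 0.1, size=2)     # bounded disturbance
            x = A @ x + B.flatten() * u + w
            x_nom = A @ x_nom + B.flatten() * u_nom
        print("final error (stays bounded):", x - x_nom)
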
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    TensorFlow Course

    Simple and ready-to-use tutorials for TensorFlow

    This repository houses a highly popular (~16k stars) set of TensorFlow tutorials and example code aimed at beginners and intermediate users. It includes Jupyter notebooks and scripts that cover neural network fundamentals, model training, deployment, and more, with support for Google Colab.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    The Hypersim Dataset

    Photorealistic Synthetic Dataset for Holistic Indoor Scene Understanding

    Hypersim is a large-scale, photorealistic synthetic dataset and tooling suite for indoor scene understanding research. It provides richly annotated renderings—RGB, depth, surface normals, instance and semantic segmentations, and material/lighting metadata—produced from high-fidelity virtual environments. The dataset spans diverse furniture layouts, room types, and camera trajectories, enabling robust training for geometry, segmentation, and SLAM-adjacent tasks. Rendering pipelines and utilities allow researchers to reproduce sequences, generate novel views, or extract task-specific supervision. Because the data are perfectly labeled and controllable, Hypersim is well suited for pretraining and for studying domain transfer to real imagery. The repository acts as both a dataset index and a set of scripts for downloading, managing, and evaluating on standardized splits.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    tf2_course

    Notebooks for my "Deep Learning with TensorFlow 2 and Keras" course

    tf2_course provides the notebooks for the "Deep Learning with TensorFlow 2 and Keras" course by Aurélien Géron. It is structured as a teaching toolkit: you'll find notebooks covering neural networks with Keras, lower-level TensorFlow APIs, data loading and preprocessing, convolutional and recurrent networks, and deployment/distribution of models. The material is intended for learners who already have foundational knowledge of ML and wish to deepen their understanding of deep learning frameworks and practices. The repo supports experimentation: you can run code, tweak hyperparameters, and follow guided exercises that strengthen practical mastery. Rather than being book-based, it is course-based, meaning the flow, examples, and structure lean toward interactive teaching and incremental builds. It's well suited for those who want a focused deep-learning path rather than a broad ML textbook.
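
    The flavor of exercise such notebooks walk through, as a minimal sketch on synthetic data: define a small Keras model, compile it, fit it, and then tweak hyperparameters to observe the effect.

        import numpy as np
        import tensorflow as tf

        # Synthetic binary-classification data.
        x = np.random.rand(256, 4).astype("float32")
        y = (x.sum(axis=1) > 2).astype("float32")

        model = tf.keras.Sequential([
            tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        model.fit(x, y, epochs=5, batch_size=32, verbose=0)
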
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    vJEPA-2

    PyTorch code and models for VJEPA2 self-supervised learning from video

    VJEPA2 is a next-generation self-supervised learning framework for video that extends the “predict in representation space” idea from i-JEPA to the temporal domain. Instead of reconstructing pixels, it predicts the missing high-level embeddings of masked space-time regions using a context encoder and a slowly updated target encoder. This objective encourages the model to learn semantics, motion, and long-range structure without the shortcuts that pixel-level losses can invite. The architecture is designed to scale: spatiotemporal ViT backbones, flexible masking schedules, and efficient sampling let it train on long clips while remaining stable. Trained representations transfer well to downstream tasks such as action recognition, temporal localization, and video retrieval, often with simple linear probes or light fine-tuning. The repository typically includes end-to-end recipes—data pipelines, augmentation policies, training scripts, and evaluation harnesses.
    Downloads: 0 This Week
    Last Update:
    See Project