Deep Learning Frameworks for Windows


Browse free open source Deep Learning Frameworks and projects for Windows below. Use the toggles on the left to filter open source Deep Learning Frameworks by OS, license, language, programming language, and project status.

  • 1
    OpenCV

    Open Source Computer Vision Library

    The Open Source Computer Vision Library has more than 2,500 algorithms, extensive documentation and sample code for real-time computer vision. It works on Windows, Linux, Mac OS X, Android and iOS, and in your browser through JavaScript. Languages: C++, Python, Julia, JavaScript. Homepage: https://opencv.org Q&A forum: https://forum.opencv.org/ Documentation: https://docs.opencv.org Source code: https://github.com/opencv Please pay special attention to our tutorials: https://docs.opencv.org/master Books about OpenCV are described here: https://opencv.org/books.html
    Downloads: 3,147 This Week
    Last Update:
    See Project
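    As a minimal, hedged sketch of a typical OpenCV Python workflow (file names are placeholders, not from the listing): read an image, convert it to grayscale, and run Canny edge detection.

        import cv2

        img = cv2.imread("photo.jpg")                  # placeholder input file
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # convert the BGR image to grayscale
        edges = cv2.Canny(gray, 100, 200)              # Canny edge detection with example thresholds
        cv2.imwrite("edges.jpg", edges)                # placeholder output file
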
  • 2
    PyTorch

    Open source machine learning framework

    PyTorch is a Python package that offers Tensor computation (like NumPy) with strong GPU acceleration and deep neural networks built on a tape-based autograd system. This project allows for fast, flexible experimentation and efficient production. PyTorch consists of torch (Tensor library), torch.autograd (tape-based automatic differentiation library), torch.jit (a compilation stack [TorchScript]), torch.nn (neural networks library), torch.multiprocessing (Python multiprocessing), and torch.utils (DataLoader and other utility functions). PyTorch can be used as a replacement for NumPy, or as a deep learning research platform that provides optimum flexibility and speed.
    Downloads: 111 This Week
    Last Update:
    See Project
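    To illustrate the tape-based autograd system mentioned above, a minimal sketch (not part of the listing) that records operations on a tensor and differentiates through them:

        import torch

        x = torch.randn(3, 3, requires_grad=True)  # tensor tracked by autograd
        y = (x ** 2).sum()                          # operations are recorded on the tape
        y.backward()                                # reverse-mode differentiation
        print(torch.allclose(x.grad, 2 * x))        # dy/dx = 2x, so this prints True
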
  • 3
    YOLOv3

    Object detection architectures and models pretrained on the COCO data

    Fast, precise and easy to train, YOLOv5 has a long and successful history of real-time object detection. Treat YOLOv5 as a university where you'll feed your model information for it to learn from and grow into one integrated tool. You can get started with fewer than 6 lines of code using YOLOv5 and its PyTorch implementation. Have a go using our API by uploading your own image and watch as YOLOv5 identifies objects using our pretrained models. Start training your model without being an expert. Students love YOLOv5 for its simplicity, and there are many quickstart examples for you to get started within seconds. Export and deploy your YOLOv5 model with just 1 line of code. There are also loads of quickstart guides and tutorials available to get your model where it needs to be. Create state-of-the-art deep learning models with YOLOv5.
    Downloads: 91 This Week
    Last Update:
    See Project
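    As a hedged illustration of the "fewer than 6 lines of code" claim (the image path is a placeholder), YOLOv5 can be loaded through torch.hub and run on an image:

        import torch

        # Downloads the small YOLOv5 model and pretrained weights on first use
        model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
        results = model("image.jpg")   # placeholder image path
        results.print()                # summary of detected objects
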
  • 4
    AirSim

    A simulator for drones, cars and more, built on Unreal Engine

    AirSim is an open-source, cross-platform simulator for drones, cars and other vehicles, built on Unreal Engine, with an experimental Unity release in the works. It supports software-in-the-loop simulation with popular flight controllers such as PX4 & ArduPilot and hardware-in-the-loop with PX4 for physically and visually realistic simulations. It is developed as an Unreal plugin that can simply be dropped into any Unreal environment. AirSim's development is oriented towards the goal of creating a platform for AI research to experiment with deep learning, computer vision and reinforcement learning algorithms for autonomous vehicles. For this purpose, AirSim also exposes APIs to retrieve data and control vehicles in a platform-independent way. AirSim is fully enabled for multiple vehicles. This capability allows you to create multiple vehicles easily and use APIs to control them.
    Downloads: 85 This Week
    Last Update:
    See Project
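    A minimal sketch of the platform-independent vehicle APIs mentioned above, assuming the AirSim Python client package and a simulator running locally:

        import airsim

        client = airsim.MultirotorClient()                   # connect to the simulator on localhost
        client.confirmConnection()
        client.enableApiControl(True)
        client.armDisarm(True)
        client.takeoffAsync().join()                         # blocking takeoff
        client.moveToPositionAsync(-10, 10, -10, 5).join()   # fly to (x, y, z) at 5 m/s
        client.landAsync().join()
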
  • 5
    Netron

    Visualizer for neural network, deep learning, machine learning models

    Netron is a viewer for neural network, deep learning and machine learning models. Netron supports ONNX, Keras, TensorFlow Lite, Caffe, Darknet, Core ML, MNN, MXNet, ncnn, PaddlePaddle, Caffe2, Barracuda, Tengine, TNN, RKNN, MindSpore Lite, and UFF. Netron has experimental support for TensorFlow, PyTorch, TorchScript, OpenVINO, Torch, Arm NN, BigDL, Chainer, CNTK, Deeplearning4j, MediaPipe, ML.NET, scikit-learn, and TensorFlow.js. There is an extensive variety of sample model files to download or open using the browser version. It runs on macOS, Windows and Linux, and is also available as a Python server and in the browser.
    Downloads: 55 This Week
    Last Update:
    See Project
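    The Python server mentioned above can be launched from a script; a minimal sketch (the model file name is a placeholder):

        import netron

        # Serves an interactive visualization of the model in the browser
        netron.start("model.onnx")   # placeholder model file
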
  • 6
    Video-subtitle-extractor

    A GUI tool for extracting hard-coded subtitle (hardsub) from videos

    Extracts hard-coded subtitles from video and generates srt files. There is no need to apply for a third-party API; text recognition is performed locally. It is a deep-learning-based video subtitle extraction framework that includes subtitle region detection and subtitle content extraction, packaged as a GUI tool for extracting hard-coded subtitles (hardsub) from videos and generating srt files. It uses local OCR, so there is no need to set up or call any API or to access online OCR services such as Baidu or Ali. It supports GPU acceleration, which provides higher accuracy and faster extraction. (CLI version) Users do not need to set the subtitle area manually; the project automatically detects the subtitle area with a text detection model, filters out text in non-subtitle areas and removes watermark (station logo) text.
    Downloads: 54 This Week
    Last Update:
    See Project
  • 7
    ONNX Runtime

    ONNX Runtime: cross-platform, high performance ML inferencing

    ONNX Runtime is a cross-platform inference and training machine-learning accelerator. ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. It supports a variety of frameworks, operating systems and hardware platforms, with built-in optimizations that deliver up to 17X faster inferencing and up to 1.4X faster training.
    Downloads: 53 This Week
    Last Update:
    See Project
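    A minimal inference sketch with the ONNX Runtime Python API (the model file and input shape are placeholder assumptions):

        import numpy as np
        import onnxruntime as ort

        session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
        input_name = session.get_inputs()[0].name
        x = np.random.rand(1, 3, 224, 224).astype(np.float32)   # assumed input shape
        outputs = session.run(None, {input_name: x})             # compute all model outputs
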
  • 8
    OpenVINO

    OpenVINO™ Toolkit repository

    OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference. Boost deep learning performance in computer vision, automatic speech recognition, natural language processing and other common tasks. Use models trained with popular frameworks like TensorFlow, PyTorch and more. Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud. This open-source version includes several components, namely the Model Optimizer, OpenVINO™ Runtime and Post-Training Optimization Tool, as well as CPU, GPU, MYRIAD, multi-device and heterogeneous plugins to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi.
    Downloads: 40 This Week
    Last Update:
    See Project
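    A hedged sketch of loading and compiling a model with the OpenVINO Runtime Python API (2022+ naming; file name and input shape are placeholders):

        import numpy as np
        from openvino.runtime import Core

        core = Core()
        model = core.read_model("model.xml")          # placeholder IR model
        compiled = core.compile_model(model, "CPU")   # target the CPU plugin
        x = np.random.rand(1, 3, 224, 224).astype(np.float32)   # assumed input shape
        result = compiled([x])                        # run inference on one input
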
  • 9
    Computer Vision Annotation Tool (CVAT)

    Interactive video and image annotation tool for computer vision

    Computer Vision Annotation Tool (CVAT) is a free and open source, interactive online tool for annotating videos and images for computer vision algorithms. It offers many powerful features, including automatic annotation using deep learning models, interpolation of bounding boxes between key frames, LDAP integration and more. It is used by a professional data annotation team to annotate millions of objects with different properties. The UX and UI were also specially developed by the team for computer vision tasks. CVAT supports several annotation formats. Format selection can be done after clicking on the Upload annotation and Dump annotation buttons.
    Downloads: 34 This Week
    Last Update:
    See Project
  • 10
    ncnn

    High-performance neural network inference framework for mobile

    ncnn is a high-performance neural network inference computing framework designed specifically for mobile platforms. It puts artificial intelligence right at your fingertips, has no third-party dependencies, and is faster than all other known open source frameworks on mobile phone CPUs. ncnn allows developers to easily deploy deep learning algorithm models to mobile platforms and create intelligent apps. It is cross-platform and supports most commonly used CNN networks, including classical CNNs (VGG, AlexNet, GoogleNet, Inception), face detection (MTCNN, RetinaFace), segmentation (FCN, PSPNet, UNet, YOLACT), and more. ncnn is currently being used in a number of Tencent applications, namely QQ, Qzone, WeChat, and Pitu.
    Downloads: 33 This Week
    Last Update:
    See Project
  • 11
    Hello AI World

    Guide to deploying deep-learning inference networks

    Hello AI World is a great way to start using Jetson and experiencing the power of AI. In just a couple of hours, you can have a set of deep learning inference demos up and running for realtime image classification and object detection on your Jetson Developer Kit with JetPack SDK and NVIDIA TensorRT. The tutorial focuses on networks related to computer vision, and includes the use of live cameras. You’ll also get to code your own easy-to-follow recognition program in Python or C++, and train your own DNN models onboard Jetson with PyTorch. Ready to dive into deep learning? It only takes two days. We’ll provide you with all the tools you need, including easy to follow guides, software samples such as TensorRT code, and even pre-trained network models including ImageNet and DetectNet examples. Follow these directions to integrate deep learning into your platform of choice and quickly develop a proof-of-concept design.
    Downloads: 20 This Week
    Last Update:
    See Project
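    A hedged sketch of the Python recognition program described above, assuming the jetson-inference packages installed by the project (the image path is a placeholder):

        import jetson.inference
        import jetson.utils

        net = jetson.inference.imageNet("googlenet")   # load a pretrained classification network
        img = jetson.utils.loadImage("my_image.jpg")   # placeholder image
        class_id, confidence = net.Classify(img)
        print(net.GetClassDesc(class_id), confidence)
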
  • 12
    ONNX

    Open standard for machine learning interoperability

    ONNX is an open format built to represent machine learning models. ONNX defines a common set of operators - the building blocks of machine learning and deep learning models - and a common file format to enable AI developers to use models with a variety of frameworks, tools, runtimes, and compilers. Open Neural Network Exchange (ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. Currently we focus on the capabilities needed for inferencing (scoring). ONNX is widely supported and can be found in many frameworks, tools, and hardware. Enabling interoperability between different frameworks and streamlining the path from research to production helps increase the speed of innovation in the AI community.
    Downloads: 17 This Week
    Last Update:
    See Project
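    A minimal sketch of working with an ONNX model file in Python (the file name is a placeholder):

        import onnx

        model = onnx.load("model.onnx")                     # placeholder model file
        onnx.checker.check_model(model)                     # validate against the ONNX spec
        print(onnx.helper.printable_graph(model.graph))     # human-readable view of the graph
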
  • 13
    GROBID

    A machine learning software for extracting information

    GROBID is a machine learning library for extracting, parsing, and re-structuring raw documents such as PDF into structured XML/TEI encoded documents, with a particular focus on technical and scientific publications. First developments started in 2008 as a hobby; in 2011 the tool was made available as open source. Work on GROBID has been steady as a side project since the beginning and is expected to continue as such. Header extraction and parsing from articles in PDF format covers the usual bibliographical information (e.g. title, abstract, authors, affiliations, keywords, etc.). Reference extraction and parsing from articles in PDF format reaches around .87 F1-score on an independent PubMed Central set of 1,943 PDFs containing 90,125 references, and around .89 on a similar bioRxiv set of 2,000 PDFs (using the deep learning citation model). All the usual publication metadata are covered (including DOI, PMID, etc.).
    Downloads: 16 This Week
    Last Update:
    See Project
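    GROBID is typically used through its REST service; a hedged sketch, assuming a GROBID server running locally on its default port 8070 and a placeholder PDF:

        import requests

        with open("paper.pdf", "rb") as f:   # placeholder PDF
            resp = requests.post(
                "http://localhost:8070/api/processFulltextDocument",
                files={"input": f},
            )
        print(resp.text[:500])               # beginning of the TEI/XML output
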
  • 14
    TensorRT

    C++ library for high performance inference on NVIDIA GPUs

    NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications. TensorRT-based applications perform up to 40X faster than CPU-only platforms during inference. With TensorRT, you can optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyperscale data centers, embedded, or automotive product platforms. TensorRT is built on CUDA®, NVIDIA’s parallel programming model, and enables you to optimize inference leveraging libraries, development tools, and technologies in CUDA-X™ for artificial intelligence, autonomous machines, high-performance computing, and graphics. With new NVIDIA Ampere Architecture GPUs, TensorRT also leverages sparse tensor cores providing an additional performance boost.
    Downloads: 15 This Week
    Last Update:
    See Project
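    A hedged sketch of building an engine from an ONNX model with the TensorRT 8.x Python API (the model file is a placeholder):

        import tensorrt as trt

        logger = trt.Logger(trt.Logger.WARNING)
        builder = trt.Builder(logger)
        flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
        network = builder.create_network(flags)
        parser = trt.OnnxParser(network, logger)
        with open("model.onnx", "rb") as f:   # placeholder ONNX model
            parser.parse(f.read())
        config = builder.create_builder_config()
        engine_bytes = builder.build_serialized_network(network, config)  # serialized engine
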
  • 15
    GIMP ML

    AI for GNU Image Manipulation Program

    This repository introduces GIMP3-ML, a set of Python plugins for the widely popular GNU Image Manipulation Program (GIMP). It brings recent advances in computer vision to the conventional image editing pipeline. Deep learning applications such as monocular depth estimation, semantic segmentation, mask generative adversarial networks, image super-resolution, de-noising and coloring have been incorporated into GIMP through Python-based plugins. Additionally, operations on images such as edge detection and color clustering have also been added. GIMP-ML relies on standard Python packages such as numpy, scikit-image, pillow, pytorch, opencv, and scipy. In addition, GIMP-ML also aims to bring the benefits of deep learning networks used for computer vision tasks to routine image processing workflows.
    Downloads: 14 This Week
    Last Update:
    See Project
  • 16
    MIT Deep Learning Book

    MIT Deep Learning Book in PDF format by Ian Goodfellow

    The Deep Learning textbook is a resource intended to help students and practitioners enter the field of machine learning in general and deep learning in particular. The online version of the book is complete and will remain available online for free. This project provides the MIT Press book by Ian Goodfellow, Yoshua Bengio and Aaron Courville in PDF format (complete and in parts). Written by three experts in the field, Deep Learning is the only comprehensive book on the subject. The book is not officially available as a PDF download, so I have taken prints of the HTML content and bound them into a PDF version, as suggested by the website itself. Printing works best directly from the browser using Chrome; other browsers do not work as well.
    Downloads: 13 This Week
    Last Update:
    See Project
  • 17
    DocTR

    Library for OCR-related tasks powered by Deep Learning

    DocTR provides an easy and powerful way to extract valuable information from your documents. Seamlessly process documents for Natural Language Understanding tasks: we provide OCR predictors to parse textual information (localize and identify each word) from your documents. Robust two-stage (detection + recognition) OCR predictors with pretrained parameters. User-friendly: 3 lines of code to load a document and extract text with a predictor. State-of-the-art performance on public document datasets, comparable with Google Vision/AWS Textract. Easy integration (available templates for browser demo & API deployment). End-to-end OCR is achieved in docTR using a two-stage approach: text detection (localizing words), then text recognition (identifying all characters in the word). As such, you can select the architecture used for text detection, and the one for text recognition, from the list of available implementations.
    Downloads: 9 This Week
    Last Update:
    See Project
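    A hedged version of the "3 lines of code" workflow described above (the document path is a placeholder):

        from doctr.io import DocumentFile
        from doctr.models import ocr_predictor

        model = ocr_predictor(pretrained=True)       # detection + recognition predictor
        doc = DocumentFile.from_pdf("invoice.pdf")   # placeholder document
        result = model(doc)
        print(result.render())                       # extracted text
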
  • 18
    DeepLearning

    Deep Learning (Flower Book) mathematical derivation

    " Deep Learning " is the only comprehensive book in the field of deep learning. The full name is also called the Deep Learning AI Bible (Deep Learning) . It is edited by three world-renowned experts, Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Includes linear algebra, probability theory, information theory, numerical optimization, and related content in machine learning. At the same time, it also introduces deep learning techniques used by practitioners in the industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling and practical methods, and investigates topics such as natural language processing, Applications in speech recognition, computer vision, online recommender systems, bioinformatics, and video games. Finally, the Deep Learning book provides research directions covering theoretical topics including linear factor models, autoencoders, representation learning, structured probabilistic models, etc.
    Downloads: 8 This Week
    Last Update:
    See Project
  • 19
    HanLP

    Han Language Processing

    HanLP is a multilingual Natural Language Processing (NLP) library composed of a series of models and algorithms. Built on TensorFlow 2.0, it was designed to advance state-of-the-art deep learning techniques and popularize the application of natural language processing in both academia and industry. HanLP is capable of lexical analysis (Chinese word segmentation, part-of-speech tagging, named entity recognition), syntax analysis, text classification, and sentiment analysis. It comes with pretrained models for numerous languages including Chinese and English. It offers efficient performance, clear structure and customizable features, with plenty more amazing features to look forward to on the roadmap.
    Downloads: 8 This Week
    Last Update:
    See Project
  • 20
    Keras

    Python-based neural networks API

    Python Deep Learning library
    Downloads: 7 This Week
    Last Update:
    See Project
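    A minimal sketch of the Keras Sequential API (layer sizes are illustrative only):

        from tensorflow import keras

        model = keras.Sequential([
            keras.layers.Dense(64, activation="relu", input_shape=(784,)),
            keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.summary()
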
  • 21
    Large Language Models (LLMs)

    Connect MATLAB to LLM APIs, including OpenAI® Chat Completions

    This repository enables MATLAB to connect with large language models (LLMs) such as OpenAI's ChatGPT, DALL-E, Azure OpenAI, and Ollama, integrating their natural language processing and image generation capabilities directly within MATLAB environments. It facilitates creating chatbots, summarizing text, and image generation, among other tasks.
    Downloads: 7 This Week
    Last Update:
    See Project
  • 22
    Porcupine

    On-device wake word detection powered by deep learning

    Build always-listening yet private voice applications. Porcupine is a highly-accurate and lightweight wake word engine that enables building always-listening voice-enabled applications. It uses deep neural networks trained in real-world environments. Compact and computationally efficient, it is perfect for IoT. Cross-platform: Arm Cortex-M, STM32, PSoC, Arduino, and i.MX RT; Raspberry Pi, NVIDIA Jetson Nano, and BeagleBone; Android and iOS; Chrome, Safari, Firefox, and Edge; Linux (x86_64), macOS (x86_64, arm64), and Windows (x86_64). Scalable: it can detect multiple always-listening voice commands with no added runtime footprint. Self-service: developers can train custom wake word models using Picovoice Console. Porcupine is the right product if you need to detect one or a few static (always-listening) voice commands. If you want to create voice experiences similar to Alexa or Google, see the Picovoice platform.
    Downloads: 7 This Week
    Last Update:
    See Project
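    A hedged sketch of the wake word API, assuming the pvporcupine Python package; the AccessKey is a placeholder and audio is fed one frame at a time:

        import pvporcupine

        porcupine = pvporcupine.create(
            access_key="YOUR_ACCESS_KEY",       # placeholder Picovoice AccessKey
            keywords=["porcupine"],             # built-in keyword
        )
        pcm = [0] * porcupine.frame_length      # one frame of 16-bit, 16 kHz audio samples
        keyword_index = porcupine.process(pcm)  # >= 0 when the wake word is detected
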
  • 23
    Torch-TensorRT

    PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT

    Torch-TensorRT is a compiler for PyTorch/TorchScript, targeting NVIDIA GPUs via NVIDIA’s TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch’s Just-In-Time (JIT) compiler, Torch-TensorRT is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript program into a module targeting a TensorRT engine. Torch-TensorRT operates as a PyTorch extension and compiles modules that integrate into the JIT runtime seamlessly. After compilation, using the optimized graph should feel no different from running a TorchScript module. You also have access to TensorRT’s suite of configurations at compile time, so you are able to specify operating precision (FP32/FP16/INT8) and other settings for your module.
    Downloads: 7 This Week
    Last Update:
    See Project
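    A hedged sketch of the ahead-of-time compile step described above (model and input shape are illustrative; a CUDA-capable GPU is assumed):

        import torch
        import torch_tensorrt
        import torchvision.models as models

        model = models.resnet18(pretrained=True).eval().cuda()
        trt_module = torch_tensorrt.compile(
            model,
            inputs=[torch_tensorrt.Input((1, 3, 224, 224))],   # assumed input shape
            enabled_precisions={torch.half},                   # allow FP16 kernels
        )
        out = trt_module(torch.randn(1, 3, 224, 224).cuda())   # runs through the TensorRT engine
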
  • 24
    dlib

    Toolkit for making machine learning and data analysis applications

    Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real world problems. It is used in both industry and academia in a wide range of domains including robotics, embedded devices, mobile phones, and large high performance computing environments. Dlib's open source licensing allows you to use it in any application, free of charge. Good unit test coverage, the ratio of unit test lines of code to library lines of code is about 1 to 4. The library is tested regularly on MS Windows, Linux, and Mac OS X systems. No other packages are required to use the library, only APIs that are provided by an out of the box OS are needed. There is no installation or configure step needed before you can use the library. All operating system specific code is isolated inside the OS abstraction layers which are kept as small as possible.
    Downloads: 7 This Week
    Last Update:
    See Project
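    A minimal sketch of dlib's Python face detection API (the image path is a placeholder):

        import dlib

        detector = dlib.get_frontal_face_detector()
        img = dlib.load_rgb_image("photo.jpg")   # placeholder image
        faces = detector(img, 1)                 # upsample once to find smaller faces
        for rect in faces:
            print(rect.left(), rect.top(), rect.right(), rect.bottom())
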
  • 25
    Transformers

    State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX

    Transformers provides APIs and tools to easily download and train state-of-the-art pre-trained models. Using pre-trained models can reduce your compute costs, carbon footprint, and save you the time and resources required to train a model from scratch. These models support common tasks in different modalities. Text, for tasks like text classification, information extraction, question answering, summarization, translation, text generation, in over 100 languages. Images, for tasks like image classification, object detection, and segmentation. Audio, for tasks like speech recognition and audio classification. Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on our model hub. At the same time, each python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
    Downloads: 5 This Week
    Last Update:
    See Project
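    A minimal sketch of the pipeline API for downloading and using a pretrained model (the default sentiment model is downloaded on first use):

        from transformers import pipeline

        classifier = pipeline("sentiment-analysis")   # downloads a default pretrained model
        print(classifier("Transformers makes state-of-the-art models easy to use."))
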