Open Source Python Machine Learning Software - Page 9

Python Machine Learning Software

Browse free open source Python Machine Learning Software and projects below. Use the toggles on the left to filter open source Python Machine Learning Software by OS, license, language, programming language, and project status.

  • 1
    NLP Architect

    A model library for exploring state-of-the-art deep learning

    NLP Architect is an open-source Python library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing and Natural Language Understanding neural networks. The library includes past and ongoing NLP research and development efforts from Intel AI Lab. NLP Architect is designed to be flexible for adding new models, neural network components, and data handling methods, and to make training and running models easy. It is a model-oriented library designed to showcase novel and different neural network optimizations. The library contains NLP/NLU models organized per task, different neural network topologies (which are used in the models), procedures for simplifying workflows in the library, pre-defined data processors and dataset loaders, and miscellaneous utilities. The library is designed to be a tool for the full model-development workflow: data pre-processing, model building, training, validation, inference, and saving or loading a model.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 2
    NLP Best Practices

    Natural Language Processing Best Practices & Examples

    In recent years, natural language processing (NLP) has seen rapid growth in quality and usability, and this has helped to drive business adoption of artificial intelligence (AI) solutions. In the last few years, researchers have been applying newer deep learning methods to NLP, and data scientists have started moving from traditional methods to state-of-the-art (SOTA) deep neural network (DNN) algorithms that use language models pretrained on large text corpora. This repository contains examples and best practices for building NLP systems, provided as Jupyter notebooks and utility functions. The focus of the repository is on state-of-the-art methods and common scenarios that are popular among researchers and practitioners working on problems involving text and language. The goal of this repository is to build a comprehensive set of tools and examples that leverage recent advances in NLP algorithms, neural architectures, and distributed machine learning systems.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 3
    NVIDIA NeMo

    Toolkit for conversational AI

    NVIDIA NeMo, part of the NVIDIA AI platform, is a toolkit for building new state-of-the-art conversational AI models. NeMo has separate collections for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) models. Each collection consists of prebuilt modules that include everything needed to train on your data. Every module can easily be customized, extended, and composed to create new conversational AI model architectures. Conversational AI architectures are typically large and require a lot of data and compute for training. NeMo uses PyTorch Lightning for easy and performant multi-GPU/multi-node mixed-precision training. Supported models include Jasper, QuartzNet, CitriNet, Conformer-CTC, Conformer-Transducer, Squeezeformer-CTC, Squeezeformer-Transducer, ContextNet, LSTM-Transducer (RNNT), and LSTM-CTC. An NGC collection of pre-trained speech processing models is also available; see the sketch after this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
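
    A minimal sketch of loading a pre-trained NeMo ASR model from NGC and transcribing audio, following common NeMo examples; the checkpoint name and the transcribe call may vary between NeMo versions.

    ```python
    # Hedged sketch: assumes nemo_toolkit[asr] is installed and the
    # "QuartzNet15x5Base-En" checkpoint is available on NGC.
    import nemo.collections.asr as nemo_asr

    # Download a pre-trained ASR model and transcribe local audio files.
    asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained("QuartzNet15x5Base-En")
    transcripts = asr_model.transcribe(["sample1.wav", "sample2.wav"])
    print(transcripts)
    ```
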
  • 4
    NeuralProphet

    A simple forecasting package

    NeuralProphet bridges the gap between traditional time-series models and deep learning methods. It is a neural-network-based time-series model, inspired by Facebook Prophet and AR-Net, built on PyTorch, and it can be installed using pip. You can find the datasets used in the tutorials, including data preprocessing examples, in our neuralprophet-data repository. The documentation page may not be entirely up to date; docstrings should be reliable, so please refer to those when in doubt. We are working on improved documentation and appreciate any help to improve and update the docs. Features include lagged regressors (measured features, e.g., a temperature sensor), future regressors (features known in advance, e.g., a temperature forecast), country holidays and recurring special events, sparsity of coefficients through regularization, plotting of forecast components, model coefficients, and final predictions, and automatic selection of training-related hyperparameters. A minimal usage sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
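
    A minimal sketch of the basic NeuralProphet workflow described above, assuming a pandas DataFrame with the Prophet-style columns ds (datestamp) and y (value); the CSV file name is a placeholder.

    ```python
    import pandas as pd
    from neuralprophet import NeuralProphet

    df = pd.read_csv("toy_timeseries.csv")        # placeholder CSV with ds/y columns

    m = NeuralProphet()                           # default trend + seasonality model
    metrics = m.fit(df, freq="D")                 # fit on daily data

    future = m.make_future_dataframe(df, periods=30)
    forecast = m.predict(future)
    m.plot(forecast)                              # forecast plot
    m.plot_components(forecast)                   # trend/seasonality components
    ```
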
  • 5
    NeuroNER

    Named-entity recognition using neural networks

    Named-entity recognition (NER) aims at identifying entities of interest in text, such as locations, organizations, and temporal expressions. Identified entities can be used in various downstream applications such as patient note de-identification and information extraction systems. They can also be used as features for machine learning systems for other natural language processing tasks. NeuroNER leverages the state-of-the-art prediction capabilities of neural networks (a.k.a. "deep learning"); it is cross-platform, open source, freely available, and straightforward to use. It enables users to create or modify annotations for a new or existing corpus and to train the neural network that performs the NER. During training, NeuroNER allows monitoring of the network, and the quality of its predictions can be evaluated: performance metrics are calculated and plotted by comparing the predicted labels with the gold labels.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 6
    NiftyNet

    An open-source convolutional neural networks platform for research

    NiftyNet is a TensorFlow-based open-source convolutional neural network (CNN) platform for research in medical image analysis and image-guided therapy. NiftyNet’s modular structure is designed for sharing networks and pre-trained models: using this structure you can get started with established pre-trained networks using built-in tools, adapt existing networks to your imaging data, and quickly build new solutions to your own image analysis problems. NiftyNet currently supports medical image segmentation and generative adversarial networks. NiftyNet is not intended for clinical use.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 7
    Open Notebook

    An Open Source implementation of Notebook LM with more flexibility

    Open Notebook is an open-source, privacy-focused alternative to Google’s Notebook LM that gives users full control over their research and AI workflows. Designed to be self-hosted, it ensures complete data sovereignty by keeping your content local or within your own infrastructure. The platform supports 16+ AI providers—including OpenAI, Anthropic, Ollama, Google, and LM Studio—allowing flexible model choice and cost optimization. Open Notebook enables users to organize and analyze multi-modal content such as PDFs, videos, audio files, web pages, and Office documents. It combines full-text and vector search with context-aware AI chat to deliver insights grounded in your own research materials. With advanced features like multi-speaker podcast generation, customizable content transformations, and a comprehensive REST API, Open Notebook provides a powerful and extensible research environment.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 8
    OpenBB

    Investment Research for Everyone, Everywhere

    Customize and speed up your analysis, bring your own data, and create instant reports to gain a competitive edge, whether the source is a CSV file, a private endpoint, an RSS feed, or an SEC filing embedded directly. Chat with financial data using large language models: don’t waste time reading, create summaries in seconds, and ask how the news impacts your investments. Build your dashboard with your favorite widgets and create charts directly from raw data in seconds. Customize your dashboards to build your dream terminal, integrate your private datasets, and bring your own fine-tuned AI copilots. A minimal Python usage sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
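
    A minimal sketch using the OpenBB Platform's Python interface; the obb.equity.price.historical path reflects recent OpenBB releases and may differ in older versions.

    ```python
    from openbb import obb

    # Pull historical prices for a ticker and convert to a pandas DataFrame.
    data = obb.equity.price.historical(symbol="AAPL", start_date="2023-01-01")
    df = data.to_df()
    print(df.tail())
    ```
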
  • 9
    Optax

    Optax is a gradient processing and optimization library for JAX

    Optax is a gradient processing and optimization library for JAX. It is designed to facilitate research by providing building blocks that can be recombined in custom ways to optimize parametric models such as, but not limited to, deep neural networks. We favor focusing on small composable building blocks that can be effectively combined into custom solutions; others may build upon these basic components in more complicated abstractions. Whenever reasonable, implementations prioritize readability and code structured to match the standard equations over code reuse. A minimal usage sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
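
    A minimal sketch of the composable pattern Optax encourages: an optimizer is a pair of pure init/update functions applied to a pytree of parameters. The toy linear model and data are illustrative only.

    ```python
    import jax
    import jax.numpy as jnp
    import optax

    def loss_fn(params, x, y):
        pred = x @ params["w"] + params["b"]
        return jnp.mean((pred - y) ** 2)

    params = {"w": jnp.zeros((3,)), "b": jnp.zeros(())}
    optimizer = optax.adam(learning_rate=1e-2)        # building blocks can also be chained
    opt_state = optimizer.init(params)

    x, y = jnp.ones((8, 3)), jnp.ones((8,))
    grads = jax.grad(loss_fn)(params, x, y)           # gradients via JAX
    updates, opt_state = optimizer.update(grads, opt_state, params)
    params = optax.apply_updates(params, updates)     # one optimization step
    ```
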
  • 10
    Pandas Profiling

    Create HTML profiling reports from pandas DataFrame objects

    pandas-profiling generates profile reports from a pandas DataFrame. The pandas df.describe() function is handy yet a little basic for exploratory data analysis; pandas-profiling extends pandas DataFrame with df.profile_report(), which automatically generates a standardized univariate and multivariate report for data understanding. Reports include high-correlation warnings based on different correlation metrics (Spearman, Pearson, Kendall, Cramér’s V, Phik); the most common categories (uppercase, lowercase, separator), scripts (Latin, Cyrillic), and blocks (ASCII, Cyrillic); file sizes, creation dates, dimensions, indication of truncated images, and existence of EXIF metadata; global details about the dataset (number of records, number of variables, overall missingness and duplicates, memory footprint); and a comprehensive and automatic list of potential data quality issues (high correlation, skewness, uniformity, zeros, missing values, constant values, among others). A minimal usage sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
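
    A minimal sketch of generating a report; note that the package has since been renamed ydata-profiling, but the import below matches the name used in this listing. The CSV path is a placeholder.

    ```python
    import pandas as pd
    from pandas_profiling import ProfileReport

    df = pd.read_csv("data.csv")                      # any tabular dataset (placeholder path)
    profile = ProfileReport(df, title="Profiling Report")
    profile.to_file("report.html")                    # standalone HTML report
    # Inside a notebook, df.profile_report() renders the same report inline.
    ```
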
  • 11
    PromptTools

    Open-source tools for prompt testing and experimentation

    Welcome to prompttools, created by Hegel AI! This repo offers a set of open-source, self-hostable tools for experimenting with, testing, and evaluating LLMs, vector databases, and prompts. The core idea is to enable developers to evaluate using familiar interfaces like code, notebooks, and a local playground.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 12
    PyTorch Geometric

    Geometric deep learning extension library for PyTorch

    It consists of various methods for deep learning on graphs and other irregular structures, also known as geometric deep learning, from a variety of published papers. In addition, it provides an easy-to-use mini-batch loader for many small graphs and single giant graphs, a large number of common benchmark datasets (with simple interfaces to create your own), and helpful transforms, both for learning on arbitrary graphs and on 3D meshes or point clouds. A lot of functionality of PyTorch Geometric has been outsourced to other packages, which need to be installed additionally. These packages come with their own CPU and GPU kernel implementations based on C++/CUDA extensions. We do not recommend installation as the root user on your system Python; please set up an Anaconda/Miniconda environment or create a Docker image. We provide pip wheels for all major OS/PyTorch/CUDA combinations. A minimal usage sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
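
    A minimal sketch of a two-layer GCN on the Cora citation dataset, closely following the library's introductory example; the download root is a placeholder.

    ```python
    import torch
    import torch.nn.functional as F
    from torch_geometric.datasets import Planetoid
    from torch_geometric.nn import GCNConv

    dataset = Planetoid(root="/tmp/Cora", name="Cora")
    data = dataset[0]                                 # a single graph with train/test masks

    class GCN(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = GCNConv(dataset.num_node_features, 16)
            self.conv2 = GCNConv(16, dataset.num_classes)

        def forward(self, x, edge_index):
            x = F.relu(self.conv1(x, edge_index))
            return self.conv2(x, edge_index)

    model = GCN()
    out = model(data.x, data.edge_index)              # per-node class logits
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    ```
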
  • 13
    PyTorch Geometric Temporal

    Spatiotemporal Signal Processing with Neural Machine Learning Models

    The library consists of various dynamic and temporal geometric deep learning, embedding, and spatiotemporal regression methods from a variety of published research papers. Moreover, it comes with an easy-to-use dataset loader, train-test splitter, and temporal snapshot iterator for dynamic and temporal graphs. The framework naturally provides GPU support. It also comes with a number of benchmark datasets from the epidemiological forecasting, sharing economy, energy production, and web traffic management domains, and you can also create your own datasets. The package interfaces well with PyTorch Lightning, which allows training on CPUs and on single and multiple GPUs out of the box. PyTorch Geometric Temporal makes implementing dynamic and temporal graph neural networks quite easy; see the accompanying tutorial and the minimal sketch after this entry. Head over to our documentation to find out more about installation, creation of datasets, and a full list of implemented methods and available datasets.
    Downloads: 1 This Week
    Last Update:
    See Project
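
    A minimal sketch based on the library's tutorial: load a benchmark dataset, split it temporally, and run a recurrent graph convolution (DCRNN) over the snapshot iterator. Dataset and layer names follow the documented examples; hyperparameters are illustrative.

    ```python
    import torch
    from torch_geometric_temporal.dataset import ChickenpoxDatasetLoader
    from torch_geometric_temporal.signal import temporal_signal_split
    from torch_geometric_temporal.nn.recurrent import DCRNN

    dataset = ChickenpoxDatasetLoader().get_dataset()           # weekly case counts per county
    train_dataset, test_dataset = temporal_signal_split(dataset, train_ratio=0.8)

    recurrent = DCRNN(in_channels=4, out_channels=32, K=1)      # 4 lagged features per node
    readout = torch.nn.Linear(32, 1)

    for snapshot in train_dataset:                              # iterate temporal snapshots
        h = torch.relu(recurrent(snapshot.x, snapshot.edge_index, snapshot.edge_attr))
        y_hat = readout(h).squeeze()
        loss = torch.mean((y_hat - snapshot.y) ** 2)
    ```
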
  • 14
    PyTorch SimCLR

    PyTorch implementation of SimCLR: A Simple Framework

    For quite some time now, we have known about the benefits of transfer learning in Computer Vision (CV) applications. Nowadays, pre-trained Deep Convolutional Neural Networks (DCNNs) are the go-to starting point for learning a new task. These large models are trained on huge supervised corpora such as ImageNet and, most importantly, their features are known to adapt well to new problems. This is particularly interesting when annotated training data is scarce. In situations like this, we take the models’ pre-trained weights, append a new classifier layer on top of them, and retrain the network. This is called transfer learning, and it is one of the most used techniques in CV. Aside from a few tricks that may be needed during fine-tuning, it has been shown many times that, when training for a new task, models initialized with pre-trained weights tend to learn faster and be more accurate than models trained from scratch using random initialization. A generic transfer-learning sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
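
    A generic transfer-learning sketch with torchvision (not code from this repository): load ImageNet-pretrained weights, replace the classifier head, and fine-tune only the new layer. The number of classes is a placeholder.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    # Older torchvision versions use pretrained=True instead of the weights enum.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

    for param in model.parameters():                  # freeze the pre-trained backbone
        param.requires_grad = False

    num_classes = 10                                  # placeholder downstream task
    model.fc = nn.Linear(model.fc.in_features, num_classes)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    ```
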
  • 15
    SAHI

    A lightweight vision library for performing large object detection

    A lightweight vision library for performing large-scale object detection and instance segmentation. Object detection and instance segmentation are by far the most important application areas in computer vision; however, detection of small objects and inference on large images are still major issues in practical usage. SAHI helps developers overcome these real-world problems with many vision utilities. Detection of small objects and of objects far away in the scene is a major challenge in surveillance applications: such objects are represented by a small number of pixels in the image and lack sufficient detail, making them difficult to detect using conventional detectors. Slicing Aided Hyper Inference (SAHI) is an open-source framework that provides a generic slicing-aided inference and fine-tuning pipeline for small object detection. A minimal usage sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
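
    A minimal sketch of sliced inference with SAHI; the detector checkpoint, slice sizes, and image path are placeholders, and argument names follow recent SAHI releases.

    ```python
    from sahi import AutoDetectionModel
    from sahi.predict import get_sliced_prediction

    detection_model = AutoDetectionModel.from_pretrained(
        model_type="yolov8",                  # any supported detector backend
        model_path="yolov8n.pt",              # placeholder checkpoint
        confidence_threshold=0.4,
        device="cpu",
    )

    # Slice the large image into overlapping tiles, run detection on each,
    # and merge the predictions back into full-image coordinates.
    result = get_sliced_prediction(
        "large_scene.jpg",                    # placeholder image
        detection_model,
        slice_height=512,
        slice_width=512,
        overlap_height_ratio=0.2,
        overlap_width_ratio=0.2,
    )
    print(len(result.object_prediction_list), "objects detected")
    ```
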
  • 16
    SSD Keras

    A Keras port of single shot MultiBox detector

    This is a Keras port of the SSD model architecture introduced by Wei Liu et al. in the paper SSD: Single Shot MultiBox Detector. Ports of the trained weights of all the original models are provided below. This implementation is accurate, meaning that both the ported weights and models trained from scratch produce the same mAP values as the respective models of the original Caffe implementation. The main goal of this project is to create an SSD implementation that is well documented for those who are interested in a low-level understanding of the model. The provided tutorials, documentation and detailed comments hopefully make it a bit easier to dig into the code and adapt or build upon the model than with most other implementations out there (Keras or otherwise) that provide little to no documentation and comments. Use one of the provided trained models for transfer learning.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 17
    SSD in PyTorch 1.0

    High quality, fast, modular reference implementation of SSD in PyTorch

    This repository implements SSD (Single Shot MultiBox Detector). The implementation is heavily influenced by the projects ssd.pytorch, pytorch-ssd, and maskrcnn-benchmark, and this repository aims to be the code base for research based on SSD. Multi-GPU training and inference: we use DistributedDataParallel, so you can train or test with an arbitrary number of GPUs, and the training schema will change accordingly. Add your own modules without pain: we abstract the backbone, Detector, BoxHead, BoxPredictor, etc., so you can replace every component with your own code without changing the code base. For example, you can add EfficientNet as the backbone: just add efficient_net.py (already added), register it, and specify it in the config file, and it's done! Smooth and enjoyable training procedure: we save the state of the model, optimizer, scheduler, and training iteration, so you can stop your training and resume exactly from the saved point without changing your training command.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 18
    SageMaker Python SDK

    Training and deploying machine learning models on Amazon SageMaker

    SageMaker Python SDK is an open source library for training and deploying machine learning models on Amazon SageMaker. With the SDK, you can train and deploy models using popular deep learning frameworks such as Apache MXNet and TensorFlow. You can also train and deploy models with Amazon algorithms, which are scalable implementations of core machine learning algorithms that are optimized for SageMaker and GPU training. If you have your own algorithms built into SageMaker-compatible Docker containers, you can train and host models using these as well. A minimal usage sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
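
    A minimal sketch of training and deploying with a framework estimator; the IAM role ARN, S3 path, instance types, and framework versions are placeholders.

    ```python
    from sagemaker.tensorflow import TensorFlow

    role = "arn:aws:iam::123456789012:role/SageMakerRole"   # placeholder role ARN

    estimator = TensorFlow(
        entry_point="train.py",               # your training script
        role=role,
        instance_count=1,
        instance_type="ml.m5.xlarge",
        framework_version="2.12",             # placeholder framework/Python versions
        py_version="py310",
    )
    estimator.fit("s3://my-bucket/training-data")            # launch a training job

    predictor = estimator.deploy(initial_instance_count=1,
                                 instance_type="ml.m5.xlarge")
    print(predictor.predict({"instances": [[1.0, 2.0, 3.0]]}))
    ```
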
  • 19
    SageMaker Scikit-Learn Extension

    A library of additional estimators and SageMaker tools based on scikit

    A library of additional estimators and SageMaker tools based on scikit-learn. This project contains standalone scikit-learn estimators and additional tools to support SageMaker Autopilot; many of the additional estimators are based on existing scikit-learn estimators. SageMaker Scikit-Learn Extension is a Python module for machine learning built on top of scikit-learn. In order to use the I/O functionalities in the sagemaker_sklearn_extension.externals module, you will also need to install the mlio version 0.7 package via conda (the mlio package is only available through conda at the moment). You can also install from source by cloning this repository and running a pip install command in the root directory of the repository. For unit tests, tox uses pytest to run the tests in a Python 3.7 interpreter; tox also runs flake8 and pylint for style checks.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 20
    Scanpy

    Single-cell analysis in Python

    Scanpy is a scalable toolkit for analyzing single-cell gene expression data built jointly with anndata. It includes preprocessing, visualization, clustering, trajectory inference and differential expression testing. The Python-based implementation efficiently deals with datasets of more than one million cells.
    Downloads: 1 This Week
    Last Update:
    See Project
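
    A minimal sketch of the standard Scanpy workflow on an AnnData object, from preprocessing through clustering and visualization; the input file is a placeholder.

    ```python
    import scanpy as sc

    adata = sc.read_h5ad("pbmc_counts.h5ad")     # placeholder single-cell dataset

    sc.pp.filter_cells(adata, min_genes=200)     # basic quality filtering
    sc.pp.normalize_total(adata, target_sum=1e4)
    sc.pp.log1p(adata)
    sc.pp.highly_variable_genes(adata, n_top_genes=2000)

    sc.tl.pca(adata)                             # dimensionality reduction
    sc.pp.neighbors(adata)                       # kNN graph
    sc.tl.umap(adata)                            # embedding for visualization
    sc.tl.leiden(adata)                          # clustering
    sc.pl.umap(adata, color="leiden")
    ```
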
  • 21
    SkyPilot

    SkyPilot: Run AI and batch jobs on any infra

    SkyPilot is a framework for running AI and batch workloads on any infrastructure (Kubernetes or 12+ clouds), offering unified execution, high cost savings, and high GPU availability through a simple interface. A minimal usage sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
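
    A minimal sketch using SkyPilot's Python API (the equivalent YAML plus `sky launch` CLI workflow is more common); the setup/run commands, accelerator spec, and cluster name are placeholders.

    ```python
    import sky

    task = sky.Task(
        setup="pip install torch",            # placeholder environment setup
        run="python train.py",                # placeholder training command
    )
    task.set_resources(sky.Resources(accelerators="V100:1"))

    # Provision a cluster on whichever supported cloud or Kubernetes backend
    # is available, then run the task on it.
    sky.launch(task, cluster_name="train-cluster")
    ```
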
  • 22
    Sockeye

    Sequence-to-sequence framework, focused on Neural Machine Translation

    Sockeye is an open-source sequence-to-sequence framework for Neural Machine Translation built on PyTorch. It implements distributed training and optimized inference for state-of-the-art models, powering Amazon Translate and other MT applications. For a quickstart guide to training a standard NMT model on any size of data, see the WMT 2014 English-German tutorial. If you are interested in collaborating or have any questions, please submit a pull request or issue; you can also send questions to sockeye-dev-at-amazon-dot-com. Developers may be interested in our developer guidelines. Starting with version 3.0.0, Sockeye is based on PyTorch; version 3.0.x maintains backwards compatibility with MXNet models of version 2.3.x. If MXNet 2.x is installed, Sockeye can run with either PyTorch or MXNet, and all models trained with 2.3.x (using MXNet) can be converted to models running with PyTorch using the converter CLI (sockeye.mx_to_pt).
    Downloads: 1 This Week
    Last Update:
    See Project
  • 23
    Spinning Up in Deep RL

    Educational resource to help anyone learn deep reinforcement learning

    Welcome to Spinning Up in Deep RL! This is an educational resource produced by OpenAI that makes it easier to learn about deep reinforcement learning (deep RL). For the unfamiliar, reinforcement learning (RL) is a machine learning approach for teaching agents how to solve tasks by trial and error. Deep RL refers to the combination of RL with deep learning. At OpenAI, we believe that deep learning generally, and deep reinforcement learning specifically, will play central roles in the development of powerful AI technology. To ensure that AI is safe, we have to come up with safety strategies and algorithms that are compatible with this paradigm, so we encourage everyone who asks how they can help to study these fields. However, while there are many resources to help people quickly ramp up on deep learning, deep reinforcement learning is more challenging to break into.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 24
    StudioGAN

    StudioGAN is a PyTorch library providing implementations of networks

    StudioGAN is a PyTorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional/unconditional image generation. StudioGAN aims to offer an identical playground for modern GANs so that machine learning researchers can readily compare and analyze a new idea. Moreover, StudioGAN provides an unprecedented-scale benchmark for generative models. The benchmark includes results from GANs (BigGAN-Deep, StyleGAN-XL), auto-regressive models (MaskGIT, RQ-Transformer), and diffusion models (LSGM++, CLD-SGM, ADM-G-U). StudioGAN is a self-contained library that provides 7 GAN architectures, 9 conditioning methods, 4 adversarial losses, 13 regularization modules, 6 augmentation modules, 8 evaluation metrics, and 5 evaluation backbones. Among these configurations, we formulate 30 GANs as representatives. Each modularized option is managed through a configuration system that works through a YAML file.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 25
    Tensor2Tensor

    Library of deep learning models and datasets

    Deep Learning (DL) has enabled the rapid advancement of many useful technologies, such as machine translation, speech recognition, and object detection. In the research community, one can find code open-sourced by the authors to help in replicating their results and further advancing deep learning. However, most of these DL systems use unique setups that require significant engineering effort and may only work for a specific problem or architecture, making it hard to run new experiments and compare results. Tensor2Tensor, or T2T for short, is a library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research. T2T was developed by researchers and engineers in the Google Brain team and a community of users. It is now deprecated; we keep it running and welcome bug fixes, but encourage users to switch to the successor library, Trax.
    Downloads: 1 This Week
    Last Update:
    See Project