Open Source Python Machine Learning Software - Page 12

Python Machine Learning Software

Browse free open source Python Machine Learning Software and projects below. Use the toggles on the left to filter open source Python Machine Learning Software by OS, license, language, programming language, and project status.

  • 1
    Guild AI

    Experiment tracking, ML developer tools

    Guild AI is an open-source experiment tracking toolkit designed to bring systematic control to machine learning workflows, enabling users to build better models faster. It automatically captures every detail of training runs as unique experiments, facilitating comprehensive tracking and analysis. Users can compare and analyze runs to deepen their understanding and incrementally improve models. Guild AI simplifies hyperparameter tuning by applying state-of-the-art algorithms through straightforward commands, eliminating the need for complex trial setups. It also supports the automation of pipelines, accelerating model development, reducing errors, and providing measurable results. The toolkit is platform-agnostic, running on all major operating systems and integrating seamlessly with existing software engineering tools. Guild AI supports various remote storage types, including Amazon S3, Google Cloud Storage, Azure Blob Storage, and SSH servers.
    Downloads: 0 This Week
    Last Update:
    See Project
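
    For illustration, a minimal sketch of the workflow Guild AI documents: module-level globals act as run flags, and "key: value" lines printed to stdout are captured as scalar metrics. The script and flag names here are hypothetical.

    ```python
    # train.py -- hypothetical script; Guild AI treats module-level globals as
    # flags that can be overridden per run, e.g.:
    #   guild run train.py lr=0.01          # single run
    #   guild run train.py lr=[0.1,0.01]    # one run per flag value
    #   guild compare                       # tabular comparison of tracked runs
    lr = 0.1  # flag: learning rate

    loss = (lr - 0.03) ** 2      # stand-in for a real training loop
    print(f"loss: {loss:.6f}")   # Guild records "key: value" output as a scalar
    ```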
  • 2

    HYBRYD

    Library written in C with Python API for IPv6 networking

    This project is a rewrite of an earlier project called GLUE that I created in 2005; I am readapting it for Python 2.7.3 and GCC 4.6.3. The library is built as a simple Python extension using python setup.py install and lets you create different kinds of servers, clients, or hybrids (client-servers) over TCP/UDP using the IPv6 protocol. The architecture of the code is modeled on brain architecture. An active IPv6 address will be made available as soon as possible so that you can download pieces of code. The aim of this project was to wrap basic Linux commands in an easily scriptable form and to model an IPv6 connection as an object. Moreover, the model is fully stateful!
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    Haiku

    JAX-based neural network library

    Haiku is a library built on top of JAX designed to provide simple, composable abstractions for machine learning research. It enables users to use familiar object-oriented programming models while allowing full access to JAX's pure function transformations. Haiku is designed to make common tasks, such as managing model parameters and other model state, simpler, and it is similar in spirit to the Sonnet library that has been widely used across DeepMind: it preserves Sonnet's module-based programming model for state management while retaining access to JAX's function transformations. Haiku can be expected to compose with other libraries and work well with the rest of JAX. Like Sonnet modules, Haiku modules are Python objects that hold references to their own parameters, other modules, and methods that apply functions on user inputs.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    Haiku Sonnet for JAX

    JAX-based neural network library

    Haiku is a library built on top of JAX designed to provide simple, composable abstractions for machine learning research. JAX is a numerical computing library that combines NumPy, automatic differentiation, and first-class GPU/TPU support. Haiku is a simple neural network library for JAX that enables users to use familiar object-oriented programming models while allowing full access to JAX's pure function transformations. Haiku provides two core tools: a module abstraction, hk.Module, and a simple function transformation, hk.transform. hk.Modules are Python objects that hold references to their own parameters, other modules, and methods that apply functions on user inputs. hk.transform turns functions that use these object-oriented, functionally "impure" modules into pure functions that can be used with jax.jit, jax.grad, jax.pmap, etc.
    Downloads: 0 This Week
    Last Update:
    See Project
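
    As the description says, hk.transform converts a forward function built from hk.Module objects into a pure (init, apply) pair. A minimal sketch (the network shape and sizes are arbitrary):

    ```python
    import jax
    import jax.numpy as jnp
    import haiku as hk

    def forward(x):
        # hk.nets.MLP is an hk.Module: it holds references to its own parameters
        return hk.nets.MLP([32, 1])(x)

    model = hk.transform(forward)   # impure module code -> pure init/apply pair
    rng = jax.random.PRNGKey(42)
    x = jnp.ones([8, 4])

    params = model.init(rng, x)               # parameters returned as an explicit pytree
    y = jax.jit(model.apply)(params, rng, x)  # pure, so it composes with jax.jit/grad/pmap
    ```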
  • 5
    Hamilton DAGWorks

    Helps scientists define testable, modular, self-documenting dataflow

    Hamilton is a lightweight Python library for directed acyclic graphs (DAGs) of data transformations. Your DAG is portable: it runs anywhere Python runs, whether in a script, a notebook, an Airflow pipeline, or a FastAPI server. Your DAG is expressive: Hamilton has extensive features for defining and modifying the execution of a DAG (e.g., data validation, experiment tracking, remote execution). To create a DAG, write regular Python functions that declare their dependencies through their parameters; as sketched below, this results in readable code that can always be visualized. Hamilton loads those definitions and builds the DAG for you automatically. Hamilton brings modularity and structure to any Python application that moves data: ETL pipelines, ML workflows, LLM applications, RAG systems, and BI dashboards. The Hamilton UI lets you automatically visualize, catalog, and monitor execution.
    Downloads: 0 This Week
    Last Update:
    See Project
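
    A hedged sketch of that idea; the node and column names are invented for illustration, and ad_hoc_utils.create_temporary_module is Hamilton's documented helper for building a module on the fly (normally the functions live in their own Python module):

    ```python
    import pandas as pd
    from hamilton import ad_hoc_utils, driver

    def signups(raw: pd.DataFrame) -> pd.Series:
        # node "signups"; depends on the externally supplied input "raw"
        return raw["signups"]

    def spend(raw: pd.DataFrame) -> pd.Series:
        return raw["spend"]

    def spend_per_signup(spend: pd.Series, signups: pd.Series) -> pd.Series:
        # parameter names declare dependencies on the "spend" and "signups" nodes
        return spend / signups

    module = ad_hoc_utils.create_temporary_module(signups, spend, spend_per_signup)
    dr = driver.Builder().with_modules(module).build()  # Hamilton builds the DAG

    raw = pd.DataFrame({"signups": [10, 20], "spend": [100.0, 400.0]})
    print(dr.execute(["spend_per_signup"], inputs={"raw": raw}))  # run the requested node
    ```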
  • 6
    High-Level Training Utilities Pytorch

    High-level training, data augmentation, and utilities for Pytorch

    Contains significant improvements, bug fixes, and additional support; get it from the releases or pull the master branch. This package provides a few things: a high-level module for Keras-like training with callbacks, constraints, and regularizers; comprehensive data augmentation, transforms, sampling, and loading; and utility tensor and variable functions so you don't need NumPy as often. Have any feature requests? Submit an issue and I'll make it happen, especially for data augmentation, data loading, or sampling functions. The ModuleTrainer class provides a high-level training interface that abstracts away the training loop while providing callbacks, constraints, initializers, regularizers, and more; you also have access to the standard evaluation and prediction functions (a sketch follows below). Torchsample provides a wide range of callbacks, generally mimicking the interface found in Keras.
    Downloads: 0 This Week
    Last Update:
    See Project
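
    A heavily hedged sketch of the Keras-like interface described above; the class and method names follow the torchsample README as remembered, and exact keyword names (e.g. num_epoch) may differ between versions, so treat this as illustrative rather than definitive:

    ```python
    import torch
    import torch.nn as nn
    from torchsample.modules import ModuleTrainer

    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

    # toy data standing in for a real dataset
    x_train = torch.randn(512, 784)
    y_train = torch.randint(0, 10, (512,))

    trainer = ModuleTrainer(model)               # wraps the training loop
    trainer.compile(loss=nn.CrossEntropyLoss(),  # Keras-style compile step;
                    optimizer='adadelta')        # callbacks/regularizers also go here
    trainer.fit(x_train, y_train, num_epoch=5, batch_size=128, verbose=1)
    ```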
  • 7
    Hivemind

    Decentralized deep learning in PyTorch. Built to train models

    Hivemind is a PyTorch library for decentralized deep learning across the Internet. Its intended usage is training one large model on hundreds of computers from different universities, companies, and volunteers. Distributed training without a master node: a Distributed Hash Table allows connecting computers in a decentralized network. Fault-tolerant backpropagation: forward and backward passes succeed even if some nodes are unresponsive or take too long to respond. Decentralized parameter averaging: iteratively aggregate updates from multiple workers without the need to synchronize across the entire network. Train neural networks of arbitrary size: parts of their layers are distributed across the participants with the Decentralized Mixture-of-Experts. If you have successfully trained a model or created a downstream repository with the help of our library, feel free to submit a pull request that adds your project to the list.
    Downloads: 0 This Week
    Last Update:
    See Project
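
    A hedged sketch of joining a decentralized run, following hivemind's documented quickstart (argument values are arbitrary):

    ```python
    import torch
    import hivemind

    model = torch.nn.Linear(10, 2)
    dht = hivemind.DHT(start=True)  # start/join the Distributed Hash Table

    opt = hivemind.Optimizer(
        dht=dht,
        run_id="demo_run",        # peers sharing a run_id train together
        batch_size_per_step=32,   # samples this peer contributes per local step
        target_batch_size=4096,   # global batch accumulated before each averaged update
        optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
        use_local_updates=True,   # keep stepping locally while averaging runs in background
        verbose=True,
    )
    # `opt` is then used like a regular torch optimizer inside the training loop
    ```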
  • 8
    This program generates customizable hyper-surfaces (with multi-dimensional inputs and outputs) and samples data from them for use as benchmarks for response-surface modeling tasks or optimization algorithms.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    An agent-based situated language-learning simulation that focuses on lexical learning and grounding, featuring a unigram syntax structure and a CFG-based semantic grammar. Created as an MSc thesis project, written in Python.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    IMAGINE

    Biological image viewer and processor

    Detection, enumeration, and sizing of biological organisms by image analysis.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    IVY

    The Unified Machine Learning Framework

    Take any code that you'd like to include, for example an existing TensorFlow model and some useful functions from both the PyTorch and NumPy libraries. Choose any framework for writing your higher-level pipeline, including data loading, distributed training, analytics, logging, visualization, etc. Choose any backend framework to run the entire pipeline under the hood, and choose the most appropriate device or combination of devices for your needs. Suppose DeepMind releases an awesome model on GitHub, written in JAX; we'll use PerceiverIO as an example. Without Ivy, you would either implement the model in PyTorch yourself, spending time and energy ensuring every detail is correct, or wait for a PyTorch version to appear among the many re-implementation attempts on GitHub. With Ivy, you can instantly transpile the JAX model to PyTorch, creating an identical PyTorch equivalent of the original model.
    Downloads: 0 This Week
    Last Update:
    See Project
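
    A sketch of the transpilation workflow described above. This assumes Ivy's ivy.transpile entry point with source/to keywords as shown in its docs; the exact signature has changed across releases, so treat it as an assumption:

    ```python
    import ivy
    import jax.numpy as jnp
    import torch

    def jax_fn(x):
        # any JAX function (or a whole JAX model) could stand in here
        return jnp.sum(x ** 2)

    # assumption: transpile the JAX callable into a PyTorch-native callable
    torch_fn = ivy.transpile(jax_fn, source="jax", to="torch")
    print(torch_fn(torch.arange(4.0)))  # runs as native PyTorch
    ```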
  • 12
    Imagen - Pytorch

    Implementation of Imagen, Google's Text-to-Image Neural Network

    Implementation of Imagen, Google's text-to-image neural network that beats DALL-E 2, in PyTorch. It is the new SOTA for text-to-image synthesis. Architecturally, it is actually much simpler than DALL-E 2: it consists of a cascading DDPM conditioned on text embeddings from a large pre-trained T5 model (an attention network). It also contains dynamic clipping for improved classifier-free guidance, noise-level conditioning, and a memory-efficient U-Net design. It appears neither CLIP nor a prior network is needed after all, and so research continues. For simpler training, you can directly supply text strings instead of precomputing text encodings (although for scaling purposes, you will definitely want to precompute the textual embeddings and mask).
    Downloads: 0 This Week
    Last Update:
    See Project
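
    A down-sized, hedged version of the repository's README usage, showing the two-unet cascade and direct text conditioning (sizes are shrunk for illustration):

    ```python
    import torch
    from imagen_pytorch import Unet, Imagen

    unet1 = Unet(dim=32, cond_dim=64, dim_mults=(1, 2), num_resnet_blocks=1,
                 layer_attns=(False, True), layer_cross_attns=(False, True))
    unet2 = Unet(dim=32, cond_dim=64, dim_mults=(1, 2), num_resnet_blocks=1,
                 layer_attns=(False, True), layer_cross_attns=(False, True))

    imagen = Imagen(
        unets=(unet1, unet2),
        image_sizes=(16, 32),   # cascading DDPM: base resolution, then super-resolution
        timesteps=100,
        cond_drop_prob=0.1,     # enables classifier-free guidance at sampling time
    )

    texts = ['a small red square', 'a small blue circle']
    images = torch.randn(2, 3, 32, 32)

    # text strings can be passed directly; T5 embeddings are computed on the fly
    for unet_number in (1, 2):
        loss = imagen(images, texts=texts, unet_number=unet_number)
        loss.backward()
    ```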
  • 13
    Implicit

    Fast Python collaborative filtering for implicit feedback datasets

    This project provides fast Python implementations of several popular recommendation algorithms for implicit feedback datasets. All models have multi-threaded training routines, using Cython and OpenMP to fit the models in parallel across all available CPU cores. In addition, the ALS and BPR models both have custom CUDA kernels, enabling fitting on compatible GPUs. This library also supports approximate nearest-neighbour libraries such as Annoy, NMSLIB, and Faiss for speeding up recommendations.
    Downloads: 0 This Week
    Last Update:
    See Project
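
    A brief sketch of the ALS workflow; this follows the v0.5+ API, where fit and recommend take a user-item CSR matrix (older releases used an item-user layout):

    ```python
    import numpy as np
    import scipy.sparse as sp
    from implicit.als import AlternatingLeastSquares

    # toy implicit-feedback confidences: rows are users, columns are items
    rng = np.random.default_rng(0)
    user_items = sp.csr_matrix(rng.integers(0, 2, size=(50, 100)).astype(np.float32))

    model = AlternatingLeastSquares(factors=32, regularization=0.01, iterations=15)
    model.fit(user_items)  # multi-threaded Cython/OpenMP training

    ids, scores = model.recommend(0, user_items[0], N=5)  # top-5 items for user 0
    print(ids, scores)
    ```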
  • 14
    A platform for parallel computation in the Amazon cloud, including machine learning ensembles written in R for computational biology and other areas of scientific research. Home to MR-Tandem, a Hadoop-enabled fork of the X!Tandem peptide search engine.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    Intel Extension for PyTorch

    A Python package for extending the official PyTorch

    Intel® Extension for PyTorch* extends PyTorch* with up-to-date feature optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs through the PyTorch* xpu device.
    Downloads: 0 This Week
    Last Update:
    See Project
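
    A minimal sketch of the documented ipex.optimize flow for CPU inference (the model and shapes are arbitrary):

    ```python
    import torch
    import intel_extension_for_pytorch as ipex

    model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()).eval()

    # apply Intel CPU optimizations (AVX-512/VNNI/AMX where the hardware supports them)
    model = ipex.optimize(model, dtype=torch.bfloat16)

    with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
        out = model(torch.randn(8, 64))
    print(out.shape)
    ```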
  • 16
    Intel neon

    Intel® Nervana™ reference deep learning framework

    neon is Intel's reference deep learning framework, committed to best performance on all hardware and designed for ease of use and extensibility. See the new features in our latest release. Note that neon v2.0.0+ has been optimized for much better CPU performance by enabling the Intel Math Kernel Library (MKL); the DNN (Deep Neural Networks) component of MKL that neon uses is provided free of charge and downloaded automatically as part of the neon installation. The GPU backend is selected by default when a compatible GPU resource is found on the system. MKL takes advantage of the parallelization and vectorization capabilities of Intel Xeon and Xeon Phi systems; when hyperthreading is enabled, we recommend a KMP_AFFINITY setting that maps parallel threads 1:1 to the available physical cores.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    Kashgari

    Kashgari is a production-level NLP Transfer learning framework

    Kashgari is a simple and powerful NLP transfer-learning framework that lets you build a state-of-the-art model in five minutes for named entity recognition (NER), part-of-speech (PoS) tagging, and text classification tasks.
    Downloads: 0 This Week
    Last Update:
    See Project
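
    A hedged sketch of the advertised build-a-model-in-minutes flow for text classification; the toy corpus is invented, and the class and keyword names follow Kashgari's v2 README as remembered:

    ```python
    from kashgari.tasks.classification import BiLSTM_Model

    train_x = [['this', 'movie', 'was', 'great'],
               ['utterly', 'boring', 'film']] * 50   # tokenized texts
    train_y = ['pos', 'neg'] * 50                    # one label per text

    model = BiLSTM_Model()   # pass a BERT-style embedding here for transfer learning
    model.fit(train_x, train_y, epochs=2)
    ```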
  • 18
    Keepsake

    Version control for machine learning

    Keepsake is a Python library that uploads files and metadata (like hyperparameters) to Amazon S3 or Google Cloud Storage. You can get the data back out using the command-line interface or a notebook.
    Downloads: 0 This Week
    Last Update:
    See Project
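
    A short sketch of the flow Keepsake documents; it assumes a keepsake.yaml in the working directory pointing at a repository (e.g. an S3 bucket or a local file:// path):

    ```python
    import torch
    import keepsake

    experiment = keepsake.init(path=".", params={"learning_rate": 0.01})
    model = torch.nn.Linear(10, 1)

    for epoch in range(3):
        loss = 1.0 / (epoch + 1)  # stand-in for a real training loss
        torch.save(model.state_dict(), "model.pth")
        # each checkpoint versions the saved file and records metrics
        experiment.checkpoint(path="model.pth", step=epoch,
                              metrics={"loss": loss},
                              primary_metric=("loss", "minimize"))
    ```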
  • 19
    Keras Attention Mechanism

    Attention mechanism Implementation for Keras

    Many-to-one attention mechanism for Keras. We demonstrate that using attention yields a higher accuracy on the IMDB dataset. We consider two LSTM networks: one with this attention layer and the other with a fully connected layer; both have the same number of parameters (250K) for a fair comparison. In the delimiter task, the attention is expected to be highest just after the delimiters, and as training progresses the model learns the task and the attention map converges to the ground truth. We also consider many 1D sequences of the same length, where the task is to find the maximum of each sequence: we give the full sequence processed by the RNN layer to the attention layer and expect the attention to focus on the maximum of each sequence.
    Downloads: 0 This Week
    Last Update:
    See Project
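
    A sketch of the many-to-one usage on the max-of-sequence task; the Attention layer and its units argument follow the package README as remembered, so treat the details as assumptions:

    ```python
    import numpy as np
    from tensorflow.keras.layers import Dense, Input, LSTM
    from tensorflow.keras.models import Model
    from attention import Attention

    seq_len = 20
    inputs = Input(shape=(seq_len, 1))
    x = LSTM(64, return_sequences=True)(inputs)  # attention needs the full sequence
    x = Attention(units=32)(x)                   # many-to-one: collapses the time axis
    outputs = Dense(1)(x)

    model = Model(inputs, outputs)
    model.compile(optimizer='adam', loss='mse')

    data = np.random.rand(256, seq_len, 1)
    model.fit(data, data.max(axis=1), epochs=1)  # task: predict the max of each sequence
    ```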
  • 20
    Keras TCN

    Keras Temporal Convolutional Network

    TCNs exhibit longer memory than recurrent architectures with the same capacity, and perform better than LSTM/GRU on a vast range of tasks (sequential MNIST, the adding problem, copy memory, word-level PTB, ...). They offer parallelism (convolutional layers), flexible receptive field size (it is possible to specify how far the model can see), and stable gradients (avoiding the vanishing gradients of backpropagation through time). The usual way is to import the TCN layer and use it inside a Keras model. The receptive field is defined as the maximum number of steps back in time from the current sample at time T that a filter from (block, layer, stack, TCN) can hit (the effective history), plus 1; the receptive field of the TCN can be calculated from its hyperparameters. Once keras-tcn is installed as a package, you can get a glimpse of what TCNs can do; some example tasks are available in the repository for this purpose.
    Downloads: 0 This Week
    Last Update:
    See Project
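
    A minimal sketch of using the TCN layer inside a Keras model (shapes and hyperparameters are arbitrary):

    ```python
    import numpy as np
    from tensorflow.keras.layers import Dense, Input
    from tensorflow.keras.models import Model
    from tcn import TCN

    inputs = Input(shape=(100, 2))  # 100 timesteps, 2 input channels
    x = TCN(nb_filters=32, kernel_size=3, dilations=(1, 2, 4, 8))(inputs)  # -> (batch, 32)
    outputs = Dense(1)(x)

    model = Model(inputs, outputs)
    model.compile(optimizer='adam', loss='mse')
    model.fit(np.random.rand(64, 100, 2), np.random.rand(64, 1), epochs=1)
    ```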
  • 21
    KerasTuner

    A Hyperparameter Tuning Library for Keras

    KerasTuner is an easy-to-use, scalable hyperparameter optimization framework that solves the pain points of hyperparameter search. Easily configure your search space with a define-by-run syntax, then leverage one of the available search algorithms to find the best hyperparameter values for your models. KerasTuner comes with Bayesian Optimization, Hyperband, and Random Search algorithms built-in, and is also designed to be easy for researchers to extend in order to experiment with new search algorithms.
    Downloads: 0 This Week
    Last Update:
    See Project
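
    A small end-to-end sketch of the define-by-run search-space syntax with the built-in RandomSearch tuner (data and ranges are arbitrary):

    ```python
    import numpy as np
    import keras
    import keras_tuner

    def build_model(hp):
        # define-by-run: hyperparameters are declared inline as the model is built
        model = keras.Sequential([
            keras.layers.Dense(hp.Int("units", min_value=32, max_value=256, step=32),
                               activation="relu"),
            keras.layers.Dense(1),
        ])
        lr = hp.Float("lr", min_value=1e-4, max_value=1e-2, sampling="log")
        model.compile(optimizer=keras.optimizers.Adam(lr), loss="mse")
        return model

    tuner = keras_tuner.RandomSearch(build_model, objective="val_loss", max_trials=5)
    x, y = np.random.rand(256, 8), np.random.rand(256, 1)
    tuner.search(x, y, validation_split=0.2, epochs=3)
    best_model = tuner.get_best_models(num_models=1)[0]
    ```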
  • 22
    Key-book

    Proofs, cases, concept supplements, and reference explanations

    The book "Introduction to Machine Learning Theory" (hereinafter referred to as "Introduction") written by Zhou Zhihua, Wang Wei, Gao Wei, and other teachers fills the regret of the lack of introductory works on machine learning theory in China. This book attempts to provide an introductory guide for readers interested in learning machine learning theory and researching machine learning theory in an easy-to-understand language. "Guide" mainly covers seven parts, corresponding to seven important concepts or theoretical tools in machine learning theory, namely: learnability, (hypothesis space) complexity, generalization bound, stability, consistency, convergence rate, regret circle. Daoyin is a highly theoretical book, involving a large number of mathematical theorems and various proofs. Although the writing team has reduced the difficulty as much as possible, due to the nature of machine learning theory, the book still places high demands on the reader's mathematical background.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    A spider that collects data from the MySpace social network. For now, it is designed only to extract information about Native American people, as it is used for a social-science study at UNAM (Universidad Nacional Autónoma de México).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24

    LWPR

    Locally Weighted Projection Regression (LWPR)

    Locally Weighted Projection Regression (LWPR) is a fully incremental, online algorithm for non-linear function approximation in high-dimensional spaces, capable of handling redundant and irrelevant input dimensions. At its core, it uses locally linear models spanned by a small number of univariate regressions in selected directions in input space. A locally weighted variant of Partial Least Squares (PLS) is employed for the dimensionality reduction. Please cite: [1] Sethu Vijayakumar, Aaron D'Souza and Stefan Schaal, Incremental Online Learning in High Dimensions, Neural Computation, vol. 17, no. 12, pp. 2602-2634 (2005). [2] Stefan Klanke, Sethu Vijayakumar and Stefan Schaal, A Library for Locally Weighted Projection Regression, Journal of Machine Learning Research (JMLR), vol. 9, pp. 623-626 (2008). More details and usage guidelines are on the code website.
    Downloads: 0 This Week
    Last Update:
    See Project
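
    A hedged sketch of incremental training with the Python bindings, adapted from memory of the library's documentation (attribute names such as init_D are assumptions to verify against the code website):

    ```python
    import numpy as np
    from lwpr import LWPR

    model = LWPR(1, 1)               # 1 input dimension, 1 output dimension
    model.init_D = 20 * np.eye(1)    # initial distance metric for receptive fields
    model.update_D = True            # adapt the distance metric online

    # fully incremental, online updates on a noisy 1-D function
    for x in np.random.uniform(-1.0, 1.0, 500):
        xv = np.array([x])
        yv = np.array([np.sin(2.0 * x) + 0.05 * np.random.randn()])
        model.update(xv, yv)

    print(model.predict(np.array([0.5])))
    ```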
  • 25
    Lambda Networks

    Implementation of LambdaNetworks, a new approach to image recognition

    Implementation of λ Networks, a new approach to image recognition that reaches SOTA on ImageNet. The new method utilizes λ layer, which captures interactions by transforming contexts into linear functions, termed lambdas, and applying these linear functions to each input separately. Shinel94 has added a Keras implementation! It won't be officially supported in this repository, so either copy / paste the code under ./lambda_networks/tfkeras.py or make sure to install tensorflow and keras before running the provided commands.
    Downloads: 0 This Week
    Last Update:
    See Project
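
    A minimal usage sketch mirroring the repository README (the hyperparameter values are the README's own):

    ```python
    import torch
    from lambda_networks import LambdaLayer

    layer = LambdaLayer(
        dim=32,       # input channel dimension
        dim_out=32,   # output channel dimension
        r=23,         # local context window size (use n=... instead for global context)
        dim_k=16,     # key/query depth
        heads=4,
        dim_u=4,      # intra-depth dimension
    )

    x = torch.randn(1, 32, 64, 64)  # (batch, channels, height, width)
    out = layer(x)                  # same spatial size, dim_out channels
    print(out.shape)
    ```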