Open Source Python Machine Learning Software - Page 6

Python Machine Learning Software

Browse free open source Python Machine Learning Software and projects below. Use the toggles on the left to filter open source Python Machine Learning Software by OS, license, language, programming language, and project status.

  • 1
    KAIR

    Image restoration toolbox (PyTorch) with training and testing codes

    Image restoration toolbox (PyTorch). Training and testing codes for DPIR, USRNet, DnCNN, FFDNet, SRMD, DPSR, BSR/GAN, SwinIR.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 2
    Keras TCN

    Keras Temporal Convolutional Network

    TCNs exhibit longer memory than recurrent architectures with the same capacity and perform better than LSTM/GRU on a wide range of tasks (sequential MNIST, the adding problem, copy memory, word-level PTB, etc.). They offer parallelism (convolutional layers), a flexible receptive field size (you can specify how far back the model can see), and stable gradients (avoiding the vanishing gradients of backpropagation through time). The usual way to use the library is to import the TCN layer and use it inside a Keras model, as sketched below. The receptive field is defined as the maximum number of steps back in time from the current sample at time T that a filter from any (block, layer, stack) of the TCN can reach (the effective history), plus 1, and it can be calculated for any configuration. Once keras-tcn is installed as a package, you can get a glimpse of what is possible with TCNs; some example tasks are available in the repository for this purpose.
    Downloads: 3 This Week
    Last Update:
    See Project
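    A minimal sketch of the import-and-use pattern described above, assuming keras-tcn and TensorFlow are installed (layer arguments and data are illustrative):

        import numpy as np
        from tensorflow.keras.layers import Dense, Input
        from tensorflow.keras.models import Model
        from tcn import TCN  # pip install keras-tcn

        # Toy regression data: 100 sequences, 20 timesteps, 1 feature.
        x = np.random.rand(100, 20, 1)
        y = np.random.rand(100, 1)

        inputs = Input(shape=(20, 1))
        tcn_layer = TCN(nb_filters=16, kernel_size=3, dilations=[1, 2, 4, 8])
        # tcn_layer.receptive_field reports the effective history (per the README).
        outputs = Dense(1)(tcn_layer(inputs))

        model = Model(inputs, outputs)
        model.compile(optimizer='adam', loss='mse')
        model.fit(x, y, epochs=1, verbose=0)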
  • 3
    Lazy Predict

    Lazy Predict helps build many basic models without much code

    Lazy Predict helps build dozens of basic models with almost no code and helps you understand which model families work better on your data without any parameter tuning, as sketched below.
    Downloads: 3 This Week
    Last Update:
    See Project
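    A sketch of the documented LazyClassifier workflow (the dataset choice is arbitrary):

        from lazypredict.Supervised import LazyClassifier
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import train_test_split

        X, y = load_breast_cancer(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

        # Fits dozens of scikit-learn classifiers with default settings and
        # returns a leaderboard of per-model metrics.
        clf = LazyClassifier(verbose=0, ignore_warnings=True, custom_metric=None)
        models, predictions = clf.fit(X_train, X_test, y_train, y_test)
        print(models)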
  • 4
    Machine Learning PyTorch Scikit-Learn

    Code Repository for Machine Learning with PyTorch and Scikit-Learn

    Initially, this project started as the 4th edition of Python Machine Learning. However, after putting so much passion and hard work into the changes and new topics, we thought it deserved a new title. So, what’s new? There are many new topics and additions, including the switch from TensorFlow to PyTorch, new chapters on graph neural networks and transformers, a new section on gradient boosting, and many more that I will detail in a separate blog post. For those who are interested in knowing what this book covers in general, I’d describe it as a comprehensive resource on the fundamental concepts of machine learning and deep learning. The first half of the book introduces readers to machine learning using scikit-learn, the de facto approach for working with tabular datasets. Then, the second half of this book focuses on deep learning, including applications to natural language processing and computer vision.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 5
    Mlxtend

    A library of extension and helper modules for Python's data analysis and machine learning libraries

    Mlxtend (machine learning extensions) is a Python library of useful tools for day-to-day data science tasks; a small frequent-pattern-mining example is sketched below.
    Downloads: 3 This Week
    Last Update:
    See Project
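    One example of the kind of helper Mlxtend provides: frequent-pattern mining with TransactionEncoder and apriori (the transactions are made up):

        import pandas as pd
        from mlxtend.preprocessing import TransactionEncoder
        from mlxtend.frequent_patterns import apriori, association_rules

        transactions = [['milk', 'bread'], ['bread', 'butter'], ['milk', 'bread', 'butter']]
        te = TransactionEncoder()
        onehot = te.fit(transactions).transform(transactions)  # boolean item matrix
        df = pd.DataFrame(onehot, columns=te.columns_)

        frequent = apriori(df, min_support=0.5, use_colnames=True)
        rules = association_rules(frequent, metric='confidence', min_threshold=0.7)
        print(rules[['antecedents', 'consequents', 'confidence']])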
  • 6
    NVIDIA NeMo

    Toolkit for conversational AI

    NVIDIA NeMo, part of the NVIDIA AI platform, is a toolkit for building new state-of-the-art conversational AI models. NeMo has separate collections for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) models. Each collection consists of prebuilt modules that include everything needed to train on your data. Every module can easily be customized, extended, and composed to create new conversational AI model architectures. Conversational AI architectures are typically large and require a lot of data and compute for training. NeMo uses PyTorch Lightning for easy and performant multi-GPU/multi-node mixed-precision training. Supported models include Jasper, QuartzNet, CitriNet, Conformer-CTC, Conformer-Transducer, Squeezeformer-CTC, Squeezeformer-Transducer, ContextNet, LSTM-Transducer (RNNT), and LSTM-CTC. Pre-trained speech processing models are available in an NGC collection; loading one takes a few lines, as sketched below.
    Downloads: 3 This Week
    Last Update:
    See Project
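    A minimal ASR sketch of loading a pre-trained model from the NGC collection, assuming nemo_toolkit is installed (the audio file path is hypothetical):

        import nemo.collections.asr as nemo_asr

        # Download a pre-trained English QuartzNet checkpoint from NGC.
        asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")

        # Transcribe a local audio file (path is a placeholder).
        transcripts = asr_model.transcribe(["sample.wav"])
        print(transcripts)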
  • 7
    NeuralForecast

    Scalable and user-friendly neural forecasting algorithms.

    NeuralForecast offers a large collection of neural forecasting models focusing on their performance, usability, and robustness. The models range from classic networks like RNNs to the latest transformers: MLP, LSTM, GRU, RNN, TCN, TimesNet, BiTCN, DeepAR, NBEATS, NBEATSx, NHITS, TiDE, DeepNPTS, TSMixer, TSMixerx, MLPMultivariate, DLinear, NLinear, TFT, Informer, AutoFormer, FedFormer, PatchTST, iTransformer, StemGNN, and TimeLLM. Neural forecasting methods are widely believed to be capable of improving the accuracy and efficiency of forecasting pipelines. Unfortunately, available implementations and published research have yet to realize that potential: they are hard to use and often fail to improve over statistical methods while being computationally prohibitive. For this reason, we created NeuralForecast, a library favoring proven accurate and efficient models, with a focus on usability, as sketched below.
    Downloads: 3 This Week
    Last Update:
    See Project
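    A minimal fit/predict sketch with one of the models listed above (NBEATS), using the library's bundled AirPassengers demo frame (hyperparameters are illustrative):

        from neuralforecast import NeuralForecast
        from neuralforecast.models import NBEATS
        from neuralforecast.utils import AirPassengersDF

        # Long-format dataframe with columns: unique_id, ds (timestamp), y (target).
        nf = NeuralForecast(models=[NBEATS(h=12, input_size=24, max_steps=50)], freq='M')
        nf.fit(df=AirPassengersDF)
        forecasts = nf.predict()  # 12 steps ahead per series
        print(forecasts.head())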
  • 8
    Nixtla TimeGPT

    TimeGPT-1: production-ready pre-trained Time Series Foundation Model

    TimeGPT is a production-ready generative pretrained transformer for time series. With just a few lines of code, it can produce accurate forecasts across domains such as retail, electricity, finance, and IoT, as sketched below. Whether you're a bank forecasting market trends or a startup predicting product demand, TimeGPT democratizes access to cutting-edge predictive insights, eliminating the need for a dedicated team of machine learning engineers.
    Downloads: 3 This Week
    Last Update:
    See Project
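    A hedged sketch of the few-lines-of-code claim using the nixtla Python client (the API key is a placeholder and the series is synthetic):

        import pandas as pd
        from nixtla import NixtlaClient

        client = NixtlaClient(api_key="YOUR_API_KEY")  # placeholder key

        df = pd.DataFrame({
            "ds": pd.date_range("2024-01-01", periods=24, freq="MS"),
            "y": range(24),
        })
        fcst = client.forecast(df=df, h=12, time_col="ds", target_col="y")
        print(fcst.head())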
  • 9
    OpenVINO Training Extensions

    Trainable models and NN optimization tools

    OpenVINO™ Training Extensions provide a convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference. When ote_cli is installed in the virtual environment, you can use the ote command-line interface to perform various actions for templates related to the chosen task type, such as running, training, evaluating, and exporting. ote train trains a model (a particular model template) on a dataset and saves the results in two files. ote optimize optimizes a pre-trained model using NNCF or POT, depending on the model format: NNCF optimization is used for trained snapshots in a framework-specific format, while POT optimization is used for models exported in the OpenVINO IR format.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 10
    Seldon Core

    An MLOps framework to package, deploy, monitor and manage models

    The de facto standard open-source platform for rapidly deploying machine learning models on Kubernetes. Seldon Core, our open-source framework, makes it easier and faster to deploy your machine learning models and experiments at scale on Kubernetes, and it serves models built in any open-source or commercial model-building framework. You can make use of powerful Kubernetes features like custom resource definitions to manage model graphs, and then connect your continuous integration and deployment (CI/CD) tools to scale and update your deployment. Built on Kubernetes, it runs on any cloud and on-premises; it is framework agnostic and supports top ML libraries, toolkits, and languages; and it enables advanced deployments with experiments, ensembles, and transformers.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 11
    Sonnet

    TensorFlow-based neural network library

    Sonnet is a neural network library built on top of TensorFlow, designed to provide simple, composable abstractions for machine learning research. Sonnet can be used to build neural networks for many purposes, including different types of learning. Sonnet’s programming model revolves around a single concept: modules. Modules can hold references to parameters, to other modules, and to methods that apply some function to the user input. A number of predefined modules already ship with Sonnet, making it powerful yet simple, and users are encouraged to build their own, as sketched below. Sonnet is designed to be extremely unopinionated about how you use modules; it is simple to understand and offers clear, focused code.
    Downloads: 3 This Week
    Last Update:
    See Project
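    A sketch of the module concept described above, using Sonnet 2's snt.Module base class (layer sizes are arbitrary):

        import tensorflow as tf
        import sonnet as snt

        # A custom module: holds parameters (two Linear submodules) and a
        # __call__ method that applies them to the input.
        class MLP(snt.Module):
            def __init__(self, name=None):
                super().__init__(name=name)
                self.hidden = snt.Linear(64)
                self.out = snt.Linear(10)

            def __call__(self, x):
                return self.out(tf.nn.relu(self.hidden(x)))

        mlp = MLP()
        logits = mlp(tf.random.normal([8, 28 * 28]))  # parameters created on first call
        print(logits.shape)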
  • 12
    StatsForecast

    Fast forecasting with statistical and econometric models

    StatsForecast is a Python library for time-series forecasting that delivers a suite of classical statistical and econometric forecasting models optimized for high performance and scalability. It is designed not just for academic experiments but for production-level time-series forecasting, meaning it handles forecasting for many series at once, efficiently, reliably, and with minimal overhead. The library implements a broad set of models, including AutoARIMA, ETS, CES, Theta, plus a battery of benchmarking and baseline methods, giving users flexibility in selecting forecasting approaches depending on data characteristics (trend, seasonality, intermittent demand, etc.). Its internal implementation leverages numba to compile performance-critical code to optimized machine-level instructions, which makes the models much faster than many traditional Python counterparts.
    Downloads: 3 This Week
    Last Update:
    See Project
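    A minimal sketch of the StatsForecast workflow described above, using the bundled AirPassengers demo frame:

        from statsforecast import StatsForecast
        from statsforecast.models import AutoARIMA
        from statsforecast.utils import AirPassengersDF

        # Long-format frame with columns: unique_id, ds, y.
        sf = StatsForecast(models=[AutoARIMA(season_length=12)], freq='M')
        sf.fit(AirPassengersDF)
        forecasts = sf.predict(h=12)
        print(forecasts.head())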
  • 13
    Transformer Reinforcement Learning X

    A repo for distributed training of language models with Reinforcement Learning

    trlX is a distributed training framework designed from the ground up to focus on fine-tuning large language models with reinforcement learning using either a provided reward function or a reward-labeled dataset. Training support for Hugging Face models is provided by Accelerate-backed trainers, allowing users to fine-tune causal and T5-based language models of up to 20B parameters, such as facebook/opt-6.7b, EleutherAI/gpt-neox-20b, and google/flan-t5-xxl. For models beyond 20B parameters, trlX provides NVIDIA NeMo-backed trainers that leverage efficient parallelism techniques to scale effectively.
    Downloads: 3 This Week
    Last Update:
    See Project
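    A hedged sketch of the provided-reward-function path described above (the length-based reward is a toy stand-in, not a real preference model):

        import trlx

        # Score each generated sample; trlX optimizes the LM against this signal.
        def reward_fn(samples, **kwargs):
            return [float(len(s)) for s in samples]

        trainer = trlx.train(
            "gpt2",  # any supported Hugging Face causal LM
            reward_fn=reward_fn,
            prompts=["Once upon a time", "The weather today is"],
        )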
  • 14
    Weights and Biases

    Tool for visualizing and tracking your machine learning experiments

    Use W&B to build better models faster. Track and visualize all the pieces of your machine learning pipeline, from datasets to production models. Quickly identify model regressions. Use W&B to visualize results in real time, all in a central dashboard. Focus on the interesting ML. Spend less time manually tracking results in spreadsheets and text files. Capture dataset versions with W&B Artifacts to identify how changing data affects your resulting models. Reproduce any model, with saved code, hyperparameters, launch commands, input data, and resulting model weights. Set wandb.config once at the beginning of your script to save your hyperparameters, input settings (like dataset name or model type), and any other independent variables for your experiments. This is useful for analyzing your experiments and reproducing your work in the future. Setting configs also allows you to visualize the relationships between features of your model architecture or data pipeline and model performance.
    Downloads: 3 This Week
    Last Update:
    See Project
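    A minimal sketch of the wandb.config pattern described above (the project name and logged values are placeholders):

        import wandb

        # Set config once, up front, to capture hyperparameters and input settings.
        run = wandb.init(project="my-project",
                         config={"learning_rate": 1e-3, "epochs": 3, "model": "resnet18"})

        for epoch in range(wandb.config.epochs):
            train_loss = 1.0 / (epoch + 1)  # stand-in for a real training loss
            wandb.log({"epoch": epoch, "loss": train_loss})

        run.finish()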
  • 15
    Age and Gender Estimation

    Keras implementation of a CNN network for age and gender estimation

    This is a Keras implementation of a CNN for estimating age and gender from a face image [1, 2]. In training, the IMDB-WIKI dataset is used. Because the face images in the UTKFace dataset are tightly cropped (there is no margin around the face region), faces should also be cropped in demo.py if weights trained on the UTKFace dataset are used; please set the margin argument to 0 for tight cropping. You can evaluate a trained model on the APPA-REAL (validation) dataset. We pose the age regression problem as a deep classification problem followed by a softmax expected-value refinement and show improvements over direct regression training of CNNs. Our proposed method, Deep EXpectation (DEX) of apparent age, first detects the face in the test image and then extracts the CNN predictions from an ensemble of 20 networks on the cropped face.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 16
    Arize Phoenix

    Uncover insights, surface problems, monitor, and fine-tune your LLM

    Phoenix provides ML insights at lightning speed with zero-config observability for model drift, performance, and data quality. Phoenix is an open-source ML observability library designed for the notebook. The toolset is designed to ingest model inference data for LLMs, CV, NLP, and tabular datasets. It allows data scientists to quickly visualize their model data, monitor performance, track down issues and insights, and easily export data to drive improvements; launching the app takes a few lines, as sketched below. Deep learning models (CV, LLM, and generative) are an amazing technology that will power many future ML use cases, and a large set of these technologies are being deployed into businesses (the real world) in what we consider a production setting.
    Downloads: 2 This Week
    Last Update:
    See Project
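    A hedged sketch of launching Phoenix on a toy inference frame; the Schema/Inferences names follow recent arize-phoenix releases and the columns are made up (the API surface has shifted between versions):

        import pandas as pd
        import phoenix as px

        df = pd.DataFrame({"prediction": ["cat", "dog", "cat"],
                           "actual":     ["cat", "cat", "cat"]})

        # Schema maps dataframe columns to model roles.
        schema = px.Schema(prediction_label_column_name="prediction",
                           actual_label_column_name="actual")

        session = px.launch_app(px.Inferences(dataframe=df, schema=schema, name="production"))
        print(session.url)  # open the notebook-friendly UI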
  • 17
    AtomAI

    Deep and Machine Learning for Microscopy

    AtomAI is a PyTorch-based package for deep and machine-learning analysis of microscopy data that doesn't require any advanced knowledge of Python or machine learning. The intended audience is domain scientists with a basic understanding of how to use NumPy and Matplotlib. It was developed by Maxim Ziatdinov at Oak Ridge National Lab. The purpose of AtomAI is to provide an environment that bridges instrument-specific libraries and general physical analysis by enabling the seamless deployment of machine learning algorithms, including deep convolutional neural networks, invariant variational autoencoders, and decomposition/unmixing techniques, for image and hyperspectral data analysis. Ultimately, it aims to combine the power and flexibility of the PyTorch deep learning framework with the simplicity and intuitive nature of packages such as scikit-learn, with a focus on scientific data.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 18
    Chronos Forecasting

    Pretrained (Language) Models for Probabilistic Time Series Forecasting

    Chronos is a family of pretrained time series forecasting models based on language model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using the cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. Chronos models have been trained on a large corpus of publicly available time series data, as well as synthetic data generated using Gaussian processes.
    Downloads: 2 This Week
    Last Update:
    See Project
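    A sketch following the project's published usage (the checkpoint name is from the Chronos release; the context values are arbitrary):

        import torch
        from chronos import ChronosPipeline

        pipeline = ChronosPipeline.from_pretrained(
            "amazon/chronos-t5-small", device_map="cpu", torch_dtype=torch.float32
        )

        # Historical context; sampling yields multiple future trajectories.
        context = torch.tensor([112.0, 118.0, 132.0, 129.0, 121.0, 135.0, 148.0, 136.0])
        forecast = pipeline.predict(context, prediction_length=12)  # (series, samples, horizon)

        # Summarize the sampled trajectories into probabilistic bands.
        low, median, high = torch.quantile(forecast[0], torch.tensor([0.1, 0.5, 0.9]), dim=0)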
  • 19
    Core ML Tools

    Core ML tools contain supporting tools for Core ML model conversion

    Use Core ML Tools (coremltools) to convert machine learning models from third-party libraries to the Core ML format. This Python package contains the supporting tools for converting models from training libraries. Core ML is an Apple framework to integrate machine learning models into your app. Core ML provides a unified representation for all models. Your app uses Core ML APIs and user data to make predictions, and to fine-tune models, all on the user’s device. Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption. Running a model strictly on the user’s device removes any need for a network connection, which helps keep the user’s data private and your app responsive.
    Downloads: 2 This Week
    Last Update:
    See Project
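    A sketch of a third-party conversion as described above, assuming PyTorch and TorchVision are installed (the model choice is arbitrary):

        import torch
        import torchvision
        import coremltools as ct

        # Trace a TorchVision model, then convert the trace to a Core ML program.
        model = torchvision.models.mobilenet_v2(weights=None).eval()
        example = torch.rand(1, 3, 224, 224)
        traced = torch.jit.trace(model, example)

        mlmodel = ct.convert(traced,
                             inputs=[ct.TensorType(shape=example.shape)],
                             convert_to="mlprogram")
        mlmodel.save("MobileNetV2.mlpackage")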
  • 20
    D2L.ai

    Interactive deep learning book with multi-framework code

    Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 300 universities from 55 countries, including Stanford, MIT, Harvard, and Cambridge. This open-source book represents our attempt to make deep learning approachable, teaching you the concepts, the context, and the code. The entire book is drafted in Jupyter notebooks, seamlessly integrating exposition figures, math, and interactive examples with self-contained code. It offers sufficient technical depth to provide a starting point on the path to actually becoming an applied machine learning scientist.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 21
    Darts

    A Python library for easy manipulation and forecasting of time series

    darts is a Python library for easy manipulation and forecasting of time series. It contains a variety of models, from classics such as ARIMA to deep neural networks. The models can all be used in the same way, using fit() and predict() functions, similar to scikit-learn, as sketched below. The library also makes it easy to backtest models, combine the predictions of several models, and take external data into account. Darts supports both univariate and multivariate time series and models. The ML-based models can be trained on potentially large datasets containing multiple time series, and some of the models offer rich support for probabilistic forecasting. We recommend first setting up a clean Python environment for your project with at least Python 3.7, using your favorite tool (conda, venv, or virtualenv, with or without virtualenvwrapper).
    Downloads: 2 This Week
    Last Update:
    See Project
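    A sketch of the fit()/predict() pattern described above, using one of the datasets bundled with darts:

        from darts.datasets import AirPassengersDataset
        from darts.models import ExponentialSmoothing

        series = AirPassengersDataset().load()   # a univariate TimeSeries
        train, val = series[:-36], series[-36:]

        model = ExponentialSmoothing()           # scikit-learn-style interface
        model.fit(train)
        prediction = model.predict(len(val))     # forecast the held-out horizon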
  • 22
    DeepCTR

    Package of deep-learning based CTR models

    DeepCTR is an easy-to-use, modular, and extendible package of deep-learning-based CTR models, along with lots of core component layers that can be used to easily build custom models. You can use any complex model with model.fit() and model.predict(). It provides a tf.keras.Model-like interface for quick experiments and a TensorFlow estimator interface for large-scale data and distributed training, and it is compatible with both TF 1.x and TF 2.x. With the great success of deep learning, DNN-based techniques have been widely used in CTR prediction tasks. The data in a CTR estimation task usually includes highly sparse, high-cardinality categorical features and some dense numerical features. Since DNNs are good at handling dense numerical features, the sparse categorical features are usually mapped to dense vectors through an embedding technique, as sketched below.
    Downloads: 2 This Week
    Last Update:
    See Project
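    A hedged sketch of the embedding-plus-fit workflow described above (the feature names, vocabulary sizes, and data are made up):

        import numpy as np
        from deepctr.models import DeepFM
        from deepctr.feature_column import SparseFeat, DenseFeat

        # Hypothetical feature definitions: two sparse categorical features
        # (embedded to dense vectors) plus one dense numerical feature.
        feature_columns = [
            SparseFeat('user_id', vocabulary_size=100, embedding_dim=4),
            SparseFeat('item_id', vocabulary_size=200, embedding_dim=4),
            DenseFeat('price', 1),
        ]

        n = 256  # synthetic training rows
        model_input = {'user_id': np.random.randint(0, 100, n),
                       'item_id': np.random.randint(0, 200, n),
                       'price': np.random.rand(n)}
        y = np.random.randint(0, 2, n)

        model = DeepFM(feature_columns, feature_columns, task='binary')
        model.compile('adam', 'binary_crossentropy')
        model.fit(model_input, y, batch_size=64, epochs=1, verbose=0)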
  • 23
    DeepPavlov

    A library for deep learning end-to-end dialog systems and chatbots

    DeepPavlov makes it easy for beginners and experts to create dialogue systems. The best place to start is with the user-friendly tutorials. They provide a quick and convenient introduction to DeepPavlov with complete, end-to-end examples, and no installation is needed. Guides explain the concepts and components of DeepPavlov; follow the step-by-step instructions to install, configure, and extend the framework for your use case. DeepPavlov is an open-source framework for developing chatbots and virtual assistants. It has comprehensive and flexible tools that let developers and NLP researchers create production-ready conversational skills and complex multi-skill conversational assistants. Use BERT and other state-of-the-art deep learning models to solve classification, NER, Q&A, and other NLP tasks, as sketched below. DeepPavlov Agent allows building industrial solutions with multi-skill integration via API services.
    Downloads: 2 This Week
    Last Update:
    See Project
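    A minimal sketch of using a pretrained DeepPavlov model for NER (the config name follows DeepPavlov's published configs; download=True fetches the weights on first use):

        from deeppavlov import build_model

        ner = build_model('ner_ontonotes_bert', download=True)
        tokens, tags = ner(['DeepPavlov is developed by a team at MIPT.'])
        print(list(zip(tokens[0], tags[0])))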
  • 24
    DeepSpeed

    Deep learning optimization library making distributed training easy

    DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. DeepSpeed delivers extreme-scale model training for everyone, from data scientists training on massive supercomputers to those training on low-end clusters or even on a single GPU. Using the current generation of GPU clusters with hundreds of devices, DeepSpeed's 3D parallelism can efficiently train deep learning models with trillions of parameters. With just a single GPU, DeepSpeed's ZeRO-Offload can train models with over 10B parameters, 10x bigger than the state of the art, democratizing multi-billion-parameter model training so that many deep learning scientists can explore bigger and better models. DeepSpeed's sparse attention powers input sequences an order of magnitude longer and achieves up to 6x faster execution compared with dense transformers. A minimal sketch of its engine API follows below.
    Downloads: 2 This Week
    Last Update:
    See Project
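    A minimal single-GPU sketch of the DeepSpeed engine API (the config values are illustrative; in practice the script is launched with the deepspeed CLI, which sets up the distributed environment):

        import torch
        import deepspeed

        model = torch.nn.Linear(784, 10)
        ds_config = {
            "train_batch_size": 8,
            "zero_optimization": {"stage": 1},   # ZeRO stage 1 partitioning
            "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
        }

        model_engine, optimizer, _, _ = deepspeed.initialize(
            model=model, model_parameters=model.parameters(), config=ds_config
        )

        x = torch.randn(8, 784).to(model_engine.device)
        loss = model_engine(x).mean()
        model_engine.backward(loss)   # engine handles scaling/accumulation
        model_engine.step()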
  • 25
    Delta ML

    Deep learning based natural language and speech processing platform

    DELTA is a deep learning-based end-to-end natural language and speech processing platform. DELTA aims to provide easy and fast experiences for using, deploying, and developing natural language processing and speech models, for both academic and industry use cases. DELTA is mainly implemented using TensorFlow and Python 3. DELTA has been used to develop several state-of-the-art algorithms for publications and to deliver production systems serving millions of users. It helps you train, develop, and deploy NLP and/or speech models. Use configuration files to easily tune parameters and network structures. What you see in training is what you get in serving: all data processing and feature extraction are integrated into the model graph. Text classification, named entity recognition, question answering, text summarization, and more are supported, with uniform I/O interfaces and no changes needed for new models.
    Downloads: 2 This Week
    Last Update:
    See Project