Showing 76 open source projects for "model train design"

  • 1
    OpenVINO Training Extensions

    Trainable models and NN optimization tools

    OpenVINO™ Training Extensions provide a convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference. When ote_cli is installed in the virtual environment, you can use the ote command-line interface to perform various actions for templates related to the chosen task type, such as training, evaluating, and exporting. ote train trains a model (a particular model template) on a dataset and saves results in two files. ote optimize optimizes a pre-trained model using NNCF or POT depending on the model format. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 2
    SageMaker Training Toolkit

    Train machine learning models within Docker containers

    Train machine learning models within a Docker container using Amazon SageMaker. Amazon SageMaker is a fully managed service for data science and machine learning (ML) workflows. You can use Amazon SageMaker to simplify the process of building, training, and deploying ML models. To train a model, you can include your training script and dependencies in a Docker container that runs your training code.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    Denoising Diffusion Probabilistic Model

    Implementation of Denoising Diffusion Probabilistic Model in Pytorch

    Implementation of the Denoising Diffusion Probabilistic Model in Pytorch. It is a new approach to generative modeling that may have the potential to rival GANs. It uses denoising score matching to estimate the gradient of the data distribution, followed by Langevin sampling to sample from the true distribution. If you simply want to pass in a folder name and the desired image dimensions, you can use the Trainer class to easily train a model (see the sketch after this entry).
    Downloads: 0 This Week
    Last Update:
    See Project
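A minimal training sketch based on the Trainer workflow described above. The argument names (train_batch_size, train_lr, train_num_steps) follow the project's README and may differ across versions; "path/to/images" is a placeholder folder of training images.

```python
from denoising_diffusion_pytorch import Unet, GaussianDiffusion, Trainer

# Backbone U-Net and the diffusion wrapper around it.
model = Unet(dim=64, dim_mults=(1, 2, 4, 8))
diffusion = GaussianDiffusion(model, image_size=128, timesteps=1000)

# Point the Trainer at a folder of images and the desired image size.
trainer = Trainer(
    diffusion,
    "path/to/images",          # placeholder: any folder of training images
    train_batch_size=32,
    train_lr=8e-5,
    train_num_steps=700_000,
)
trainer.train()
```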
  • 4
    Determined

    Determined, a deep learning training platform

    ...Deploy your model using Determined's built-in model registry. Easily share on-premise or cloud GPUs with your team. Determined’s cluster scheduling offers first-class support for deep learning and seamless spot instance support. Check out examples of how you can use Determined to train popular deep learning models at scale.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    Autodistill

    Images to inference with no labeling

    Autodistill uses large, slower foundation models to train smaller, faster supervised models. Using autodistill, you can go from unlabeled images to inference on a custom model running at the edge with no human intervention in between. You can use Autodistill on your own hardware, or use the Roboflow hosted version of Autodistill to label images in the cloud (a labeling-and-training sketch follows this entry).
    Downloads: 0 This Week
    Last Update:
    See Project
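A sketch of the distillation loop described above, assuming the autodistill plugin packages for Grounded SAM (base model) and YOLOv8 (target model); the ontology prompt and folder paths are illustrative placeholders.

```python
from autodistill.detection import CaptionOntology
from autodistill_grounded_sam import GroundedSAM   # assumed base-model plugin
from autodistill_yolov8 import YOLOv8              # assumed target-model plugin

# The foundation model labels raw images according to a caption ontology
# (prompt -> class name), producing a dataset with no human labeling.
base_model = GroundedSAM(ontology=CaptionOntology({"model train": "train"}))
base_model.label(input_folder="./images", output_folder="./dataset")

# A small, fast supervised model is then trained on the auto-labeled dataset.
target_model = YOLOv8("yolov8n.pt")
target_model.train("./dataset/data.yaml", epochs=50)
```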
  • 6
    GluonTS

    Probabilistic time series modeling in Python

    GluonTS is a Python package for probabilistic time series modeling, focusing on deep learning-based models. GluonTS requires Python 3.6 or newer, and the easiest way to install it is via pip. We train a DeepAR model and make predictions using the simple "airpassengers" dataset. The dataset consists of a single time series containing monthly international airline passenger counts between 1949 and 1960, a total of 144 values (12 years × 12 months). We split the dataset into train and test parts by removing the last three years (36 months) from the train data (see the sketch after this entry). ...
    Downloads: 0 This Week
    Last Update:
    See Project
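A sketch of the airpassengers workflow described above, using a synthetic stand-in for the monthly series. DeepAREstimator and the split helper follow the project's PyTorch-based README; exact argument names may vary between GluonTS versions.

```python
import numpy as np
import pandas as pd
from gluonts.dataset.pandas import PandasDataset
from gluonts.dataset.split import split
from gluonts.torch import DeepAREstimator

# Stand-in for the 144-point monthly "airpassengers" series (1949-1960).
index = pd.date_range("1949-01", periods=144, freq="M")
df = pd.DataFrame({"passengers": np.linspace(100, 600, 144)}, index=index)
dataset = PandasDataset(df, target="passengers")

# Hold out the last three years (36 months) for testing, as described above.
training_data, test_gen = split(dataset, offset=-36)

# Train DeepAR and forecast 12 months ahead on each held-out test window.
predictor = DeepAREstimator(
    prediction_length=12, freq="M", trainer_kwargs={"max_epochs": 5}
).train(training_data)
test_data = test_gen.generate_instances(prediction_length=12, windows=3)
forecasts = list(predictor.predict(test_data.input))
```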
  • 7
    Colossal-AI

    Making large AI models cheaper, faster and more accessible

    The Transformer architecture has improved the performance of deep learning models in domains such as Computer Vision and Natural Language Processing. Better performance, however, comes with larger model sizes, which run up against the memory wall of current accelerator hardware such as GPUs. Training large models such as Vision Transformer, BERT, and GPT on a single GPU or a single machine is rarely practical, so there is an urgent demand to train models in a distributed environment. However, distributed training, especially model parallelism, often requires domain expertise in computer systems and architecture. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    Hivemind

    Decentralized deep learning in PyTorch. Built to train models

    ...Decentralized parameter averaging: iteratively aggregate updates from multiple workers without the need to synchronize across the entire network. Train neural networks of arbitrary size: parts of their layers are distributed across the participants with the Decentralized Mixture-of-Experts. If you have successfully trained a model or created a downstream repository with the help of our library, feel free to submit a pull request that adds your project to the list.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    Transformers

    State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX

    Transformers provides APIs and tools to easily download and train state-of-the-art pre-trained models. Using pre-trained models can reduce your compute costs and carbon footprint, and save you the time and resources required to train a model from scratch. These models support common tasks in different modalities: text, for tasks like text classification, information extraction, question answering, summarization, translation, and text generation, in over 100 languages (see the sketch after this entry). ...
    Downloads: 6 This Week
    Last Update:
    See Project
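A minimal sketch of the pipeline API, which downloads a pre-trained model and runs one of the text tasks listed above; the library picks a default sentiment-analysis checkpoint.

```python
from transformers import pipeline

# Downloads a pre-trained sentiment-analysis model on first use,
# then runs inference with no training from scratch.
classifier = pipeline("sentiment-analysis")
print(classifier("Fine-tuning a pre-trained model saved us weeks of compute."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```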
  • 10
    DeepSpeed

    Deep learning optimization library making distributed training easy

    DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. DeepSpeed delivers extreme-scale model training for everyone, from data scientists training on massive supercomputers to those training on low-end clusters or even on a single GPU. Using the current generation of GPU clusters with hundreds of devices, DeepSpeed's 3D parallelism can efficiently train deep learning models with trillions of parameters. With just a single GPU, DeepSpeed's ZeRO-Offload can train models with over 10B parameters, 10x bigger than the state of the art, democratizing multi-billion-parameter model training so that many deep learning scientists can explore bigger and better models (see the sketch after this entry). ...
    Downloads: 0 This Week
    Last Update:
    See Project
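A sketch of wrapping a model with deepspeed.initialize and a ZeRO config, as the entry describes. The tiny model and config values are illustrative, and real jobs are normally launched across GPUs with the deepspeed launcher.

```python
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)  # illustrative stand-in model
ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 2},  # ZeRO partitions optimizer state + grads
}

# DeepSpeed wraps the model and manages parallelism, ZeRO, and optimization.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

x = torch.randn(8, 1024).to(model_engine.device)
loss = model_engine(x).pow(2).mean()
model_engine.backward(loss)  # engine-managed backward pass
model_engine.step()          # engine-managed optimizer step
```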
  • 11
    deepfakes_faceswap

    Deepfakes Software For All

    Faceswap is the leading free and open source multi-platform deepfakes software. When faceswapping was first developed and published, the technology was groundbreaking: a huge step in AI development. It was also completely ignored outside of academia, because the code was confusing and fragmentary, required a thorough understanding of complicated AI techniques, and took a lot of effort to figure out, until one individual brought it together into a single, cohesive collection.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 12
    AIMET

    AIMET is a library that provides advanced quantization and compression techniques

    Qualcomm Innovation Center (QuIC) is at the forefront of enabling low-power inference at the edge through its pioneering model-efficiency research. QuIC has a mission to help migrate the ecosystem toward fixed-point inference. With this goal, QuIC presents the AI Model Efficiency Toolkit (AIMET), a library that provides advanced quantization and compression techniques for trained neural network models. AIMET enables neural networks to run more efficiently on fixed-point AI hardware...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 13
    PyG

    Graph Neural Network Library for PyTorch

    ...All it takes is 10-20 lines of code to get started with training a GNN model (see the sketch after this entry for a quick tour).
    Downloads: 0 This Week
    Last Update:
    See Project
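A sketch in the spirit of the "10-20 lines" claim above: a two-layer GCN trained for node classification on Cora, one of PyG's built-in citation-network benchmarks.

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

dataset = Planetoid(root="data/Cora", name="Cora")  # built-in citation graph
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_node_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
model.train()
for epoch in range(200):  # full-batch training on the node-classification task
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
```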
  • 14
    Imagen - Pytorch

    Implementation of Imagen, Google's Text-to-Image Neural Network

    ...It is the new SOTA for text-to-image synthesis. Architecturally, it is actually much simpler than DALL-E2. It consists of a cascading DDPM conditioned on text embeddings from a large pre-trained T5 model (attention network). It also contains dynamic clipping for improved classifier-free guidance, noise level conditioning, and a memory-efficient U-Net design. It appears neither CLIP nor a prior network is needed after all. And so research continues. For simpler training, you can directly supply text strings instead of precomputing text encodings (see the sketch after this entry). ...
    Downloads: 0 This Week
    Last Update:
    See Project
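A training-step sketch following the project's README pattern. The Unet hyperparameters and the random image batch are illustrative, the cascade is cut to one stage for brevity, and passing raw text strings (encoded internally with T5) is the simpler path mentioned above.

```python
import torch
from imagen_pytorch import Unet, Imagen

# Base U-Net of the cascade (hyperparameters are illustrative).
unet = Unet(dim=32, cond_dim=512, dim_mults=(1, 2, 4, 8), num_resnet_blocks=3)

imagen = Imagen(
    unets=(unet,),
    image_sizes=(64,),     # single-stage cascade for brevity
    timesteps=1000,
    cond_drop_prob=0.1,    # enables classifier-free guidance at sampling time
)

images = torch.randn(4, 3, 64, 64)                # placeholder training batch
texts = ["a model train on a wooden track"] * 4   # raw strings, T5-encoded inside

loss = imagen(images, texts=texts, unet_number=1)  # diffusion loss for unet 1
loss.backward()
```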
  • 15
    Materials Discovery: GNoME

    AI discovers 520,000 stable inorganic crystal structures for research

    ...Using GNoME, DeepMind identified 381,000 new stable materials, later expanding the dataset to include over 520,000 materials within 1 meV/atom of the convex hull as of August 2024. The repository provides datasets, model definitions, and interactive Colabs for exploring these materials, computing decomposition energies, and visualizing chemical families. Additionally, it includes JAX-based implementations of GNoME and Nequip—the latter being used to train interatomic potentials for dynamic simulations.
    Downloads: 6 This Week
    Last Update:
    See Project
  • 16
    BudouX

    Standalone, small, language-neutral

    ...It works with no dependency on third-party word segmenters such as the Google Cloud Natural Language API. It is small: it takes only around 15 KB including its machine learning model, so it's reasonable to use even on the client side. It is language-neutral: you can train a model for any language by feeding a dataset to BudouX's training script (see the sketch after this entry).
    Downloads: 0 This Week
    Last Update:
    See Project
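A minimal sketch of the Python package's default Japanese parser; the sample sentence is illustrative, and custom models for other languages would be trained separately with the training script mentioned above.

```python
import budoux

# Load the bundled Japanese model (~15 KB) and find natural phrase breaks.
parser = budoux.load_default_japanese_parser()
print(parser.parse("今日は良い天気ですね。"))
# e.g. ['今日は', '良い', '天気ですね。'] -- chunks safe to wrap lines between
```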
  • 17
    TextAttack

    Python framework for adversarial attacks and data augmentation

    TextAttack is a Python framework for adversarial attacks, data augmentation, and model training in NLP, used to generate adversarial examples for NLP models (see the sketch after this entry).
    Downloads: 0 This Week
    Last Update:
    See Project
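A sketch of running a built-in attack recipe against a Hugging Face classifier, following the framework's recipe/wrapper pattern; the checkpoint name and dataset are illustrative.

```python
import transformers
from textattack import Attacker
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# Wrap a fine-tuned classifier so TextAttack can query it.
name = "textattack/bert-base-uncased-imdb"  # illustrative checkpoint
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Build the TextFooler recipe and attack examples from the test split.
attack = TextFoolerJin2019.build(wrapper)
dataset = HuggingFaceDataset("imdb", split="test")
Attacker(attack, dataset).attack_dataset()
```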
  • 18
    Raster Vision

    Open source framework for deep learning on satellite and aerial imagery

    Raster Vision is an open source framework for Python developers building computer vision models on satellite, aerial, and other large imagery sets (including oblique drone imagery). There is built-in support for chip classification, object detection, and semantic segmentation using PyTorch. Raster Vision allows engineers to quickly and repeatably configure pipelines that go through core components of a machine learning workflow: analyzing training data, creating training chips, training...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    YOLOv5

    YOLOv5 is the world's most loved vision AI

    Introducing Ultralytics YOLOv8, the latest version of the acclaimed real-time object detection and image segmentation model. YOLOv8 is built on cutting-edge advancements in deep learning and computer vision, offering unparalleled performance in terms of speed and accuracy. Its streamlined design makes it suitable for various applications and easily adaptable to different hardware platforms, from edge devices to cloud APIs. Explore the YOLOv8 Docs, a comprehensive resource designed to help you understand and utilize its features and capabilities (a train-and-predict sketch using the ultralytics package follows this entry). ...
    Downloads: 59 This Week
    Last Update:
    See Project
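A minimal fine-tune-and-predict sketch with the ultralytics Python package; the nano checkpoint and the coco128 sample dataset are the usual quick-start choices, and the image path is a placeholder.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                  # pretrained nano checkpoint
model.train(data="coco128.yaml", epochs=3)  # quick fine-tune on a sample dataset
results = model("path/to/image.jpg")        # inference on a single image
results[0].show()                           # visualize the detected boxes
```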
  • 20
    Opacus

    Training PyTorch models with differential privacy

    Opacus is a library that enables training PyTorch models with differential privacy. It supports training with minimal code changes required on the client, has little impact on training performance, and allows the client to track online the privacy budget expended at any given moment. Vectorized per-sample gradient computation is 10x faster than microbatching. It supports most types of PyTorch models and can be used with minimal modification to the original neural network (see the sketch after this entry). Open source,...
    Downloads: 0 This Week
    Last Update:
    See Project
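A sketch of the minimal-code-changes claim above using PrivacyEngine.make_private; the toy linear model, the data, and the noise/clipping values are illustrative.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy model, optimizer, and data; the only change is one wrapping step.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loader = DataLoader(
    TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))), batch_size=8
)

privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model, optimizer=optimizer, data_loader=loader,
    noise_multiplier=1.0,  # noise added to clipped per-sample gradients
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)

criterion = torch.nn.CrossEntropyLoss()
for x, y in loader:  # the training loop itself is unchanged
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()

print("privacy budget spent:", privacy_engine.get_epsilon(delta=1e-5))
```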
  • 21
    mlforecast

    Scalable machine learning for time series forecasting

    ...Instead of writing custom code to build lagged features, rolling statistics, and date-based predictors, mlforecast generates those automatically based on a simple configuration (see the sketch after this entry). It supports multi-series forecasting, meaning you can train one model that forecasts many time series at once (common in retail, demand forecasting, etc.), rather than one model per series. The library is built to scale: behind the scenes, it can leverage distributed computing frameworks (Spark, Dask, Ray) when datasets or the number of series grow large.
    Downloads: 0 This Week
    Last Update:
    See Project
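A configuration-over-code sketch: the lags and date features are declared once and generated automatically. The synthetic daily frame and the LightGBM model are illustrative; the expected long-format columns are unique_id, ds, and y.

```python
import lightgbm as lgb
import pandas as pd
from mlforecast import MLForecast

# Long-format frame with the expected columns: unique_id, ds, y.
df = pd.DataFrame({
    "unique_id": ["store_1"] * 100,
    "ds": pd.date_range("2024-01-01", periods=100, freq="D"),
    "y": [float(i % 7 + i / 10) for i in range(100)],
})

fcst = MLForecast(
    models=[lgb.LGBMRegressor()],
    freq="D",
    lags=[7, 14],                 # lagged features built automatically
    date_features=["dayofweek"],  # calendar predictor, no custom code
)
fcst.fit(df)
print(fcst.predict(14))  # 14-step-ahead forecasts for every series at once
```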
  • 22
    PyKEEN

    A Python library for learning and evaluating knowledge graph embeddings

    PyKEEN (Python KnowlEdge EmbeddiNgs) is a Python package designed to train and evaluate knowledge graph embedding models (incorporating multi-modal information), with an emphasis on reproducible, facile experimentation (see the sketch after this entry). PyKEEN has a function, pykeen.env(), that prints relevant version information about PyTorch, CUDA, and your operating system for debugging; if you're in a Jupyter Notebook, the output is pretty-printed as an HTML table.
    Downloads: 0 This Week
    Last Update:
    See Project
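A sketch of the pipeline entry point, which trains and evaluates an embedding model in one call; the Nations toy dataset and the TransE model are standard built-ins, and the epoch count is arbitrary.

```python
from pykeen.pipeline import pipeline

# Train and evaluate TransE on the small built-in Nations knowledge graph.
result = pipeline(
    dataset="Nations",
    model="TransE",
    training_kwargs=dict(num_epochs=100),
)
print(result.metric_results.to_flat_dict())  # link-prediction metrics
result.save_to_directory("nations_transe")   # model, metrics, and metadata
```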
  • 23
    pomegranate

    Fast, flexible and easy to use probabilistic modelling in Python

    ...Together, these two design choices enable a flexibility not seen in any other probabilistic modeling package.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    tsai

    Time series deep learning and machine learning with PyTorch and fastai

    ...If you require a dependency that is not installed, tsai will ask you to install it when necessary. We've also added a new PredictionDynamics callback that displays predictions during training; this is the type of output you would get in a classification task (see the sketch after this entry). There is a new tutorial notebook on how to train your model with larger-than-memory datasets in less time, achieving up to 100% GPU usage, and another on how to track your experiments with Weights & Biases.
    Downloads: 0 This Week
    Last Update:
    See Project
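A classification sketch with tsai's high-level API; the ECG200 UCR dataset and the InceptionTime architecture are illustrative choices, and the helper names follow the library's fastai-style conventions.

```python
from tsai.all import *

# Load a small UCR benchmark and build time-series dataloaders.
X, y, splits = get_UCR_data("ECG200", split_data=False)
tfms = [None, TSClassification()]          # encode labels for classification
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, bs=64)

# Train an InceptionTime classifier with fastai's one-cycle schedule.
learn = ts_learner(dls, InceptionTime, metrics=accuracy)
learn.fit_one_cycle(25, 1e-3)
```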
  • 25
    MLPerf

    Reference implementations of MLPerf™ training benchmarks

    This is a repository of reference implementations for the MLPerf training benchmarks. These implementations are valid as starting points for benchmark implementations but are not fully optimized and are not intended to be used for "real" performance measurements of software frameworks or hardware. Benchmarking the performance of training ML models on a wide variety of use cases, software, and hardware drives AI performance across the tech industry. The MLPerf Training working group draws on...
    Downloads: 0 This Week
    Last Update:
    See Project