Showing 273 open source projects for "model train design"

  • 1
    OpenVINO Training Extensions

    Trainable models and NN optimization tools

    OpenVINO™ Training Extensions provide a convenient environment for training deep learning models and converting them with the OpenVINO™ toolkit for optimized inference. Once ote_cli is installed in the virtual environment, the ote command-line interface can run, train, evaluate, and export templates for the chosen task type. ote train trains a model (a particular model template) on a dataset and saves the results in two files; ote optimize optimizes a pre-trained model using NNCF or POT, depending on the model format. ...
    Downloads: 0 This Week
  • 2
    SageMaker Training Toolkit

    Train machine learning models within Docker containers

    Train machine learning models within a Docker container using Amazon SageMaker. Amazon SageMaker is a fully managed service for data science and machine learning (ML) workflows that simplifies building, training, and deploying ML models. To train a model, you include your training script and dependencies in a Docker container that runs your training code; the toolkit invokes that script inside the container (see the sketch below).
    Downloads: 0 This Week
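
    A hypothetical train.py showing the container contract: the toolkit passes hyperparameters as command-line arguments and exposes standard SM_* environment variables (the values below are illustrative):

    ```python
    # train.py -- entry point the toolkit runs inside the container
    import argparse
    import os

    parser = argparse.ArgumentParser()
    # Hyperparameters arrive as command-line arguments.
    parser.add_argument("--epochs", type=int, default=10)
    # SageMaker sets SM_* environment variables for standard paths.
    parser.add_argument("--model-dir",
                        default=os.environ.get("SM_MODEL_DIR", "/opt/ml/model"))
    parser.add_argument("--train",
                        default=os.environ.get("SM_CHANNEL_TRAINING",
                                               "/opt/ml/input/data/training"))
    args = parser.parse_args()

    # ... load data from args.train and train for args.epochs epochs ...

    # Anything written to the model directory is uploaded after training.
    with open(os.path.join(args.model_dir, "model.txt"), "w") as f:
        f.write("trained-model-placeholder")
    ```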
  • 3
    Denoising Diffusion Probabilistic Model

    Implementation of the Denoising Diffusion Probabilistic Model in PyTorch

    Implementation of the Denoising Diffusion Probabilistic Model in PyTorch. It is a new approach to generative modeling that may have the potential to rival GANs. It uses denoising score matching to estimate the gradient of the data distribution, followed by Langevin sampling to sample from the true distribution. If you simply want to pass in a folder name and the desired image dimensions, you can use the Trainer class to train a model easily (see the sketch below).
    Downloads: 0 This Week
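
    A minimal sketch of the folder-based Trainer flow described above, following the repository's documented usage; the hyperparameter values are illustrative:

    ```python
    from denoising_diffusion_pytorch import Unet, GaussianDiffusion, Trainer

    model = Unet(dim=64, dim_mults=(1, 2, 4, 8))   # denoising backbone
    diffusion = GaussianDiffusion(
        model,
        image_size=128,    # desired image dimensions
        timesteps=1000,    # diffusion steps
    )

    trainer = Trainer(
        diffusion,
        "path/to/your/images",      # folder of training images
        train_batch_size=32,
        train_lr=8e-5,
        train_num_steps=700_000,    # total training steps
    )
    trainer.train()
    ```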
  • 4
    MiniMind

    Train a 26M-parameter GPT from scratch in just 2h

    minimind is a framework that enables users to train a 26-million-parameter GPT (Generative Pre-trained Transformer) model from scratch in approximately two hours. It provides a streamlined process for data preparation, model training, and evaluation, making it accessible for individuals and organizations to develop their own language models without extensive computational resources.
    Downloads: 0 This Week
  • 5
    Large Concept Model

    Language modeling in a sentence representation space

    ...It includes utilities to build concept vocabularies, map supervision signals to those vocabularies, and measure zero-shot or few-shot generalization. Probing tools help diagnose what the model knows—e.g., attribute recognition, relation understanding, or compositionality—so you can iterate on data and objectives. The design is modular, making it straightforward to swap backbones, change objectives, or integrate retrieval components.
    Downloads: 0 This Week
  • 6
    Determined

    Determined, a deep learning training platform

    ...Deploy your model using Determined's built-in model registry. Easily share on-premise or cloud GPUs with your team. Determined’s cluster scheduling offers first-class support for deep learning and seamless spot instance support. Check out examples of how you can use Determined to train popular deep learning models at scale.
    Downloads: 0 This Week
  • 7
    Model Context Protocol TypeScript SDK

    The official TypeScript SDK for Model Context Protocol servers

    The TypeScript SDK simplifies integration with the Model Context Protocol, enabling developers to build MCP servers and interact with AI models effectively.
    Downloads: 0 This Week
  • 8
    Autodistill

    Images to inference with no labeling

    Autodistill uses large, slow foundation models to train small, fast supervised models. Using Autodistill, you can go from unlabeled images to inference on a custom model running at the edge, with no human intervention in between (see the sketch below). You can run Autodistill on your own hardware, or use the Roboflow-hosted version to label images in the cloud.
    Downloads: 0 This Week
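
    A sketch of the distill-and-train flow, assuming the separately installed autodistill-grounded-sam and autodistill-yolov8 connector packages; the ontology and paths are illustrative:

    ```python
    from autodistill.detection import CaptionOntology
    from autodistill_grounded_sam import GroundedSAM   # foundation (teacher) model
    from autodistill_yolov8 import YOLOv8              # target (student) model

    # Map prompts for the foundation model to the class names you want.
    ontology = CaptionOntology({"person wearing a helmet": "helmet"})

    # 1. Auto-label a folder of unlabeled images with the large model.
    base_model = GroundedSAM(ontology=ontology)
    base_model.label(input_folder="./images", extension=".jpg",
                     output_folder="./dataset")

    # 2. Train a small, fast model on the generated labels.
    target_model = YOLOv8("yolov8n.pt")
    target_model.train("./dataset/data.yaml", epochs=50)
    ```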
  • 9
    GluonTS

    Probabilistic time series modeling in Python

    GluonTS is a Python package for probabilistic time series modeling, focusing on deep-learning-based models. GluonTS requires Python 3.6 or newer, and the easiest way to install it is via pip. As an example, we can train a DeepAR model and make predictions on the simple "airpassengers" dataset, which consists of a single time series of monthly international passenger counts between 1949 and 1960, 144 values in total (12 years * 12 months). We split the dataset into train and test parts by removing the last three years (36 months) from the train data (see the sketch below). ...
    Downloads: 0 This Week
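
    A minimal sketch of that workflow, assuming a recent GluonTS release with the PyTorch backend (pip install "gluonts[torch]"); module paths and settings may differ across versions:

    ```python
    from gluonts.dataset.repository.datasets import get_dataset
    from gluonts.torch import DeepAREstimator

    # Monthly international passengers, 1949-1960 (144 values), already
    # split into train/test parts by the dataset repository.
    dataset = get_dataset("airpassengers")

    # Train DeepAR to forecast 12 months ahead (illustrative settings).
    estimator = DeepAREstimator(
        freq="M",
        prediction_length=12,
        trainer_kwargs={"max_epochs": 5},
    )
    predictor = estimator.train(dataset.train)

    # Probabilistic forecasts for the months after each training series.
    forecasts = list(predictor.predict(dataset.train))
    print(forecasts[0].mean)
    ```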
  • 10
    GPT-SoVITS

    Just 1 minute of voice data can train a good TTS model

    GPT‑SoVITS is a state-of-the-art voice conversion and TTS system that enables zero‑shot and few‑shot synthesis based on a short vocal sample (e.g., 5 seconds). It supports cross‑lingual speech synthesis across English, Chinese, Japanese, Korean, Cantonese, and more. It's powered by VITS architecture enhanced for few‑sample adaptation and real‑time usability.
    Downloads: 39 This Week
  • 11
    Colossal-AI

    Making large AI models cheaper, faster and more accessible

    The Transformer architecture has improved the performance of deep learning models in domains such as computer vision and natural language processing. Better performance, however, comes with larger model sizes, which run up against the memory wall of current accelerator hardware such as GPUs. Training large models such as Vision Transformer, BERT, and GPT on a single GPU or a single machine is rarely practical, so there is an urgent demand to train models in a distributed environment. However, distributed training, especially model parallelism, often requires domain expertise in computer systems and architecture. ...
    Downloads: 0 This Week
  • 12
    Hivemind

    Decentralized deep learning in PyTorch. Built to train models

    ...Decentralized parameter averaging: iteratively aggregate updates from multiple workers without synchronizing across the entire network. Train neural networks of arbitrary size: parts of their layers are distributed across the participants with the Decentralized Mixture-of-Experts (see the sketch below). If you have successfully trained a model or created a downstream repository with the help of our library, feel free to submit a pull request that adds your project to the list.
    Downloads: 0 This Week
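
    A rough sketch of the decentralized training setup, based on the project's quickstart; constructor arguments vary between hivemind versions, so treat the names below as assumptions:

    ```python
    import torch
    import hivemind

    model = torch.nn.Linear(10, 2)
    base_opt = torch.optim.SGD(model.parameters(), lr=0.01)

    # Join (or start) a DHT that peers use to find each other.
    dht = hivemind.DHT(start=True)

    # Wrap the local optimizer; updates are averaged across peers once
    # the collective has processed target_batch_size samples.
    opt = hivemind.Optimizer(
        dht=dht,
        run_id="demo_run",        # peers with the same run_id train together
        optimizer=base_opt,
        batch_size_per_step=32,
        target_batch_size=1024,
        use_local_updates=True,
    )
    ```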
  • 13
    MiniMind-V

    "Big Model" trains a visual multimodal VLM with 26M parameters

    MiniMind-V is an experimental open-source project that aims to train a very small multimodal vision–language model (VLM) from scratch with extremely low compute and cost, making research and experimentation accessible to more people. The repository showcases training workflows and code designed to produce a 26-million parameter model—including both image and text capabilities—using minimal resources in very little time, reflecting a trend toward democratizing AI research. ...
    Downloads: 3 This Week
  • 14
    TorchDistill

    A coding-free framework built on PyTorch

    torchdistill (formerly kdkit) offers various state-of-the-art knowledge distillation methods and enables you to design (new) experiments simply by editing a declarative YAML config file instead of Python code. Even when you need to extract intermediate representations from teacher/student models, you will NOT need to reimplement the models, which often changes the forward interface; instead, you specify the module path(s) in the YAML file. Beyond knowledge distillation, this framework helps you design and perform general deep learning experiments (WITHOUT coding) for reproducible deep learning studies, i.e., it enables you to train models without teachers simply by excluding teacher entries from a declarative YAML config file.
    Downloads: 0 This Week
  • 15
    Lama Cleaner

    Image inpainting tool powered by SOTA AI Model

    Lama Cleaner is a free, open-source, fully self-hostable image inpainting tool powered by state-of-the-art AI models. Use it to remove any unwanted object, defect, or person from your pictures, or to erase and replace anything in them (replacement powered by Stable Diffusion). Many AIGC creators are using Lama Cleaner to clean up their...
    Downloads: 41 This Week
  • 16
    Upscayl

    Free and Open Source AI Image Upscaler for Linux, macOS, and Windows

    Free and open source AI image upscaler for Linux, macOS, and Windows, built with a Linux-first philosophy. This means Linux builds are prioritized, but without breaking things for other operating systems. Upscayl does not work without a GPU: you need a Vulkan-compatible GPU to upscale images, and a CPU or iGPU will not work. You can also download the flatpak version and double-click the flatpak file to...
    Downloads: 169 This Week
  • 17
    Tokenizers

    Fast State-of-the-Art Tokenizers optimized for Research and Production

    ...Even with destructive normalization, it's always possible to recover the part of the original sentence that corresponds to any token. The library also does all the pre-processing: truncation, padding, and adding the special tokens your model needs (see the sketch below).
    Downloads: 1 This Week
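
    A minimal sketch of the pre-processing and offset-tracking features described above, assuming the bert-base-uncased tokenizer can be fetched from the Hugging Face Hub:

    ```python
    from tokenizers import Tokenizer

    # Load a pretrained tokenizer from the Hub.
    tokenizer = Tokenizer.from_pretrained("bert-base-uncased")

    # Built-in pre-processing: truncation and padding.
    tokenizer.enable_truncation(max_length=16)
    tokenizer.enable_padding(length=16)

    encoding = tokenizer.encode("Hello, how are you?")
    print(encoding.tokens)   # includes special tokens like [CLS]/[SEP]
    print(encoding.offsets)  # (start, end) spans into the original string
    ```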
  • 18
    Transformers

    State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX

    Transformers provides APIs and tools to easily download and train state-of-the-art pre-trained models. Using pre-trained models can reduce your compute costs and carbon footprint, and save you the time and resources required to train a model from scratch (see the sketch below). These models support common tasks in different modalities, including text, for tasks like text classification, information extraction, question answering, summarization, translation, and text generation in over 100 languages. ...
    Downloads: 6 This Week
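
    A minimal sketch using the pipeline API, which downloads a default pre-trained model for the task on first use:

    ```python
    from transformers import pipeline

    # Downloads a default pre-trained model for the task on first use.
    classifier = pipeline("text-classification")

    result = classifier("Using pre-trained models saves training compute.")
    print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
    ```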
  • 19
    Rhino

    On-device Speech-to-Intent engine powered by deep learning

    ...It directly infers intent from spoken commands within a given context of interest, in real time (see the sketch below). The end-to-end platform for embedding private voice AI into any software in a few lines of code. Design with no limits on top of a modular platform. Create use-case-specific voice AI models in seconds. Develop voice features with a few lines of code using intuitive, cross-platform SDKs. Deliver voice AI everywhere: on-device, mobile, web browsers, on-premise, or cloud. Measure adoption, learn, and iterate. Continuously re-design and re-train to optimize engagement. ...
    Downloads: 2 This Week
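
    A minimal sketch using the pvrhino Python SDK; the access key and context file below are placeholders obtained from the Picovoice Console:

    ```python
    import pvrhino

    rhino = pvrhino.create(
        access_key="YOUR_PICOVOICE_ACCESS_KEY",  # placeholder
        context_path="smart_lighting.rhn",       # placeholder context model
    )

    def on_audio_frame(pcm):
        # Feed 16 kHz, 16-bit PCM frames of length rhino.frame_length.
        if rhino.process(pcm):
            inference = rhino.get_inference()
            if inference.is_understood:
                print(inference.intent, inference.slots)
    ```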
  • 20
    DeepSeek VL

    Towards Real-World Vision-Language Understanding

    ...The repository includes model weights (or pointers to them), evaluation metrics on standard vision + language benchmarks, and configuration or architecture files. It also provides inference tools for forwarding an image + prompt through the model to produce text output. DeepSeek-VL is the predecessor of the newer DeepSeek-VL2 model and presumably shares its core design philosophy, but with earlier scaling, fewer enhancements, or capability tradeoffs.
    Downloads: 1 This Week
  • 21
    SAM 3

    Code for running inference and finetuning with SAM 3 model

    SAM 3 (Segment Anything Model 3) is a unified foundation model for promptable segmentation in both images and videos, capable of detecting, segmenting, and tracking objects. It accepts both text prompts (open-vocabulary concepts like “red car” or “goalkeeper in white”) and visual prompts (points, boxes, masks) and returns high-quality masks, boxes, and scores for the requested concepts. Compared with SAM 2, SAM 3 introduces the ability to exhaustively segment all instances of an...
    Downloads: 109 This Week
  • 22
    FastMCP

    The fast, Pythonic way to build Model Context Protocol servers

    ...The framework is opinionated by design, ensuring best practices and protocol compliance are the default rather than an extra burden (see the sketch below). Actively maintained and widely adopted, FastMCP powers a majority of MCP servers and has become the de facto standard for production-ready MCP applications.
    Downloads: 2 This Week
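
    A minimal FastMCP server sketch: decorating a plain Python function exposes it as an MCP tool (the server name and tool are illustrative):

    ```python
    from fastmcp import FastMCP

    mcp = FastMCP("Demo Server")

    @mcp.tool()
    def add(a: int, b: int) -> int:
        """Add two numbers."""  # the docstring becomes the tool description
        return a + b

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default
    ```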
  • 23
    Cleanlab

    The standard data-centric AI package for data quality and ML

    ...To facilitate machine learning with messy, real-world data, this data-centric AI package uses your existing models to estimate dataset problems that can be fixed to train even better models. cleanlab cleans your data's labels via state-of-the-art confident learning algorithms, published in this paper and blog. See some of the datasets cleaned with cleanlab at labelerrors.com. This package helps you find label issues and other data issues so you can train reliable ML models, and all features of cleanlab work with any dataset and any model (see the sketch below). ...
    Downloads: 0 This Week
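
    A minimal sketch of the confident-learning workflow: out-of-sample predicted probabilities from any model go in, indices of likely label errors come out. The toy data here is fabricated for illustration:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict
    from cleanlab.filter import find_label_issues

    # Toy dataset with a few deliberately flipped labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] > 0).astype(int)
    y[:5] = 1 - y[:5]  # inject label noise

    # Out-of-sample predicted probabilities from any classifier.
    pred_probs = cross_val_predict(
        LogisticRegression(), X, y, cv=5, method="predict_proba"
    )

    # Indices of examples whose labels are likely wrong.
    issues = find_label_issues(
        labels=y, pred_probs=pred_probs,
        return_indices_ranked_by="self_confidence",
    )
    print(issues)
    ```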
  • 24
    DeepSpeed

    Deep learning optimization library making distributed training easy

    DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. DeepSpeed delivers extreme-scale model training for everyone, from data scientists training on massive supercomputers to those training on low-end clusters or even a single GPU. On a current generation of GPU clusters with hundreds of devices, DeepSpeed's 3D parallelism can efficiently train deep learning models with trillions of parameters. With just a single GPU, DeepSpeed's ZeRO-Offload can train models with over 10B parameters, 10x bigger than the state of the art, democratizing multi-billion-parameter model training so that many deep learning scientists can explore bigger and better models (see the sketch below). ...
    Downloads: 0 This Week
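
    The core entry point is deepspeed.initialize, which wraps the model and optimizer according to a JSON-style config. A minimal sketch with illustrative ZeRO settings; real jobs are typically launched with the deepspeed launcher:

    ```python
    import torch
    import deepspeed

    net = torch.nn.Linear(1024, 1024)

    # Minimal config: batch size, optimizer, and ZeRO stage 2.
    ds_config = {
        "train_batch_size": 8,
        "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
        "zero_optimization": {"stage": 2},
    }

    model_engine, optimizer, _, _ = deepspeed.initialize(
        model=net,
        model_parameters=net.parameters(),
        config=ds_config,
    )

    x = torch.randn(8, 1024).to(model_engine.device)
    loss = model_engine(x).pow(2).mean()
    model_engine.backward(loss)  # handles scaling / partitioned gradients
    model_engine.step()
    ```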
  • 25
    PyTorch Image Models

    The largest collection of PyTorch image encoders / backbones

    timm (PyTorch Image Models) is a premier library hosting a vast collection of state-of-the-art image classification models and backbones such as ResNet, EfficientNet, NFNet, Vision Transformer, ConvNeXt, and more. Created by Ross Wightman and now maintained by Hugging Face, it includes pretrained weights, data loaders, augmentations, optimizers, schedulers, and reference scripts for training, evaluation, inference, and model export (see the sketch below). It's an essential toolkit for vision research and...
    Downloads: 0 This Week
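
    A minimal sketch of pulling a pretrained backbone with timm.create_model; the model name is one of many available, and num_classes swaps in a fresh classification head:

    ```python
    import timm
    import torch

    # Pretrained ImageNet backbone with a new 10-class head.
    model = timm.create_model("resnet50", pretrained=True, num_classes=10)
    model.eval()

    x = torch.randn(1, 3, 224, 224)  # dummy image batch
    with torch.no_grad():
        logits = model(x)
    print(logits.shape)  # torch.Size([1, 10])
    ```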