Search Results for "model train design" - Page 3

Showing 286 open source projects for "model train design"

  • 1
    MobileLLM

    MobileLLM: Optimizing Sub-billion Parameter Language Models

    ...The framework integrates several architectural innovations—SwiGLU activation, deep and thin network design, embedding sharing, and grouped-query attention (GQA)—to achieve a superior trade-off between model size, inference speed, and accuracy. MobileLLM demonstrates remarkable performance, with the 125M and 350M variants outperforming previous state-of-the-art models of the same scale by up to 4.3% on zero-shot commonsense reasoning tasks.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 2
    MiniMax-01

    Large-language-model & vision-language-model based on Linear Attention

    MiniMax-01 is the official repository for two flagship models: MiniMax-Text-01, a long-context language model, and MiniMax-VL-01, a vision-language model built on top of it. MiniMax-Text-01 uses a hybrid attention architecture that blends Lightning Attention, standard softmax attention, and Mixture-of-Experts (MoE) routing to achieve both high throughput and long-context reasoning. It has 456 billion total parameters with 45.9 billion activated per token and is trained with advanced parallel...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 3
    FLUX.2

    Official inference repo for FLUX.2 models

    FLUX.2 is a state-of-the-art open-weight image generation and editing model released by Black Forest Labs aimed at bridging the gap between research-grade capabilities and production-ready workflows. The model offers both text-to-image generation and powerful image editing, including editing of multiple reference images, with fidelity, consistency, and realism that push the limits of what open-source generative models have achieved. It supports high-resolution output (up to ~4 megapixels),...
    Downloads: 61 This Week
    Last Update:
    See Project
  • 4
    mlforecast

    Scalable machine learning for time series forecasting

    ...Instead of writing custom code to build lagged features, rolling statistics, and date-based predictors, mlforecast generates those automatically based on a simple configuration. It supports multi-series forecasting, meaning you can train one model that forecasts many time series at once (common in retail, demand forecasting, etc.), rather than one model per series. The library is built to scale: behind the scenes, it can leverage distributed computing frameworks (Spark, Dask, Ray) when datasets or the number of series grow large.
    Downloads: 0 This Week
    Last Update:
    See Project
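
    For the mlforecast entry above: a minimal sketch of the configuration-driven workflow it describes, assuming a pandas DataFrame in the library's usual long format (unique_id, ds, y columns). The model choice, file name, and feature settings are illustrative, not defaults taken from the project.

      import lightgbm as lgb
      import pandas as pd
      from mlforecast import MLForecast

      df = pd.read_csv("sales.csv")  # hypothetical long-format data: unique_id, ds, y

      fcst = MLForecast(
          models=[lgb.LGBMRegressor()],   # any scikit-learn-compatible regressor
          freq="D",                       # daily series
          lags=[7, 14],                   # lagged-target features built automatically
          date_features=["dayofweek"],    # calendar features derived from the ds column
      )
      fcst.fit(df)              # one model trained across all series at once
      preds = fcst.predict(14)  # 14-step-ahead forecasts for every series
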
  • 5
    PyKEEN

    A Python library for learning and evaluating knowledge graph embeddings

    PyKEEN (Python KnowlEdge EmbeddiNgs) is a Python package for training and evaluating knowledge graph embedding models, including support for multi-modal information, with an emphasis on reproducible and straightforward workflows. It provides a pykeen.env() function that prints relevant version information about PyTorch, CUDA, and your operating system for debugging; in a Jupyter Notebook, the output is pretty-printed as an HTML table. (A short usage sketch follows this entry.)
    Downloads: 0 This Week
    Last Update:
    See Project
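
    For the PyKEEN entry above: a short sketch of the pykeen.env() debugging helper plus a basic training run through PyKEEN's pipeline() API; the model and dataset names are illustrative picks from the library's built-ins.

      import pykeen
      from pykeen.pipeline import pipeline

      # Print version information about PyTorch, CUDA, and the OS for debugging;
      # inside a Jupyter notebook this renders as an HTML table.
      pykeen.env()

      # Train and evaluate a knowledge graph embedding model end to end.
      result = pipeline(
          model="TransE",
          dataset="Nations",
          training_kwargs=dict(num_epochs=5),  # kept tiny for the sketch
      )
      result.save_to_directory("transe_nations")  # persist the trained model and metrics
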
  • 6
    Ludwig AI

    Low-code framework for building custom LLMs, neural networks

    ...Retain full control of your models down to the activation functions. Support for hyperparameter optimization, explainability, and rich metric visualizations. Experiment with different model architectures, tasks, features, and modalities with just a few parameter changes in the config. Think building blocks for deep learning.
    Downloads: 0 This Week
    Last Update:
    See Project
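
    For the Ludwig AI entry above: a minimal sketch of the declarative, config-driven workflow it describes, using Ludwig's Python API. The column names and CSV paths are hypothetical.

      import pandas as pd
      from ludwig.api import LudwigModel

      # The config only declares features and their types; Ludwig assembles,
      # trains, and evaluates the model from this description.
      config = {
          "input_features": [{"name": "review_text", "type": "text"}],
          "output_features": [{"name": "sentiment", "type": "category"}],
      }

      model = LudwigModel(config)
      train_results = model.train(dataset=pd.read_csv("reviews.csv"))
      predictions = model.predict(dataset=pd.read_csv("new_reviews.csv"))
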
  • 7
    pomegranate

    Fast, flexible and easy to use probabilistic modelling in Python

    ...Together, these two design choices enable a flexibility not seen in any other probabilistic modeling package.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    Step-Audio-EditX

    LLM-based Reinforcement Learning audio edit model

    Step-Audio-EditX is an open-source, 3 billion-parameter audio model from StepFun AI designed to make expressive and precise editing of speech and audio as easy as text editing. Rather than treating audio editing as low-level waveform manipulation, this model converts speech into a sequence of discrete “audio tokens” (via a dual-codebook tokenizer) — combining a linguistic token stream and a semantic (prosody/emotion/style) token stream — thereby abstracting audio editing into high-level...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 9
    Janus

    Unified Multimodal Understanding and Generation Models

    Janus is a sophisticated open-source project from DeepSeek AI that aims to unify both visual understanding and image generation in a single model architecture. Rather than having separate systems for “look and describe” and “prompt and generate”, Janus uses an autoregressive transformer framework with a decoupled visual encoder—allowing it to ingest images for comprehension and to produce images from text prompts with shared internal representations. The design tackles long-standing conflicts in multimodal models: namely that the visual encoder has to serve both analysis (understanding) and synthesis (generation) roles. ...
    Downloads: 5 This Week
    Last Update:
    See Project
  • 10
    DeepSpeed

    Deep learning optimization library: makes distributed training easy

    DeepSpeed is an easy-to-use deep learning optimization software suite that enables unprecedented scale and speed for deep learning training and inference. With DeepSpeed you can: (1) train and run inference on dense or sparse models with billions or trillions of parameters; (2) achieve excellent system throughput and scale efficiently to thousands of GPUs; (3) train and run inference on resource-constrained GPU systems; (4) achieve very low latency and high throughput for inference; and (5) apply extreme compression for reduced inference latency and model size at low cost. DeepSpeed offers a confluence of system innovations that have made large-scale DL training effective and efficient, greatly improved ease of use, and redefined the DL training landscape in terms of the scale that is possible. ... (A minimal usage sketch follows this entry.)
    Downloads: 0 This Week
    Last Update:
    See Project
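
    For the DeepSpeed entry above: a minimal sketch of wrapping an existing PyTorch model with the DeepSpeed engine. The toy model and the ZeRO/fp16 settings are illustrative, not a tuned configuration, and a real run is normally launched through the deepspeed CLI on one or more GPUs.

      import torch
      import deepspeed

      model = torch.nn.Linear(1024, 1024)  # stand-in for a real network

      ds_config = {
          "train_batch_size": 32,
          "fp16": {"enabled": True},
          "zero_optimization": {"stage": 2},  # partition optimizer state and gradients
          "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
      }

      # The returned engine handles data parallelism, mixed precision, and ZeRO
      # partitioning behind a familiar forward/backward/step interface.
      model_engine, optimizer, _, _ = deepspeed.initialize(
          model=model, model_parameters=model.parameters(), config=ds_config
      )

      # Inside a training loop:
      #   loss = model_engine(inputs).sum()   # illustrative loss
      #   model_engine.backward(loss)
      #   model_engine.step()
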
  • 11
    Eigent

    The Open Source Cowork Desktop to Unlock Your Exceptional Productivity

    ...It enables multiple specialized AI agents to collaborate in parallel, turning complex workflows into automated, end-to-end tasks. Built on the CAMEL-AI multi-agent framework, Eigent emphasizes productivity, flexibility, and transparent system design. You can run Eigent fully locally for maximum privacy and data control, or choose a cloud-connected experience for quick access. The platform supports a wide range of AI models and integrates powerful tools through the Model Context Protocol (MCP). With human-in-the-loop controls and enterprise-ready features, Eigent balances automation with oversight and security.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 12
    TorchQuantum

    A PyTorch-based framework for Quantum Classical Simulation

    A PyTorch-based framework for Quantum Classical Simulation, Quantum Machine Learning, Quantum Neural Networks, and Parameterized Quantum Circuits, with support for easy deployment on real quantum computers. It targets researchers working on quantum algorithm design, parameterized quantum circuit training, quantum optimal control, quantum machine learning, and quantum neural networks. Features include a dynamic computation graph, automatic gradient computation, fast GPU support, and batched, tensorized model processing.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 13
    smolagents

    Agents write python code to call tools and orchestrate other agents

    ...We provide our definition on that page, where you’ll also find tips on when to use agents and when not to (spoiler: you’ll often be better off without them). smolagents is a lightweight framework for building AI agents using large language models (LLMs). It simplifies the development of AI-driven applications by providing tools to create, train, and deploy language-model-based agents. (A minimal agent sketch follows this entry.)
    Downloads: 1 This Week
    Last Update:
    See Project
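
    For the smolagents entry above: a minimal agent sketch following the project's quickstart pattern. Class names can differ between releases (older versions used HfApiModel rather than InferenceClientModel), and the empty tool list is only to keep the example short.

      from smolagents import CodeAgent, InferenceClientModel

      model = InferenceClientModel()             # hosted LLM endpoint; assumes HF credentials
      agent = CodeAgent(tools=[], model=model)   # real tools would normally be listed here

      # The agent writes and executes Python code to work out the answer.
      answer = agent.run("What is (2 ** 10) - 24?")
      print(answer)
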
  • 14
    NovaSR

    A lightning fast audio upsampler

    NovaSR is an extremely lightweight, high-performance audio upsampling model that transforms low-quality 16 kHz audio into clearer, high-fidelity 48 kHz audio with remarkable speed and efficiency. At only about 50 KB in size, the model is orders of magnitude smaller than typical audio super-resolution networks, yet it achieves high quality and real-time performance thanks to its compact architecture and efficient convolutional design.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    Bert-VITS2

    VITS2 backbone with multilingual-bert

    ...The core idea is to use BERT-style contextual embeddings for text encoding while relying on a refined VITS2 architecture for acoustic generation and vocoding. The repository includes everything needed to train, fine-tune, and run the model, from configuration files to preprocessing scripts, spectrogram utilities, and training entrypoints for multi-GPU and multi-node setups. It provides emotional modeling through “emo embeddings,” allowing voices to be conditioned on different affective states during synthesis. Releases include optimizations for Japanese and English alignment, expanded training data, spec caching and pre-generation tools, as well as ONNX export for more lightweight inference deployments.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 16
    PyTorch3D

    PyTorch3D is FAIR's library of reusable components for deep learning

    ...PyTorch3D also includes utilities for loading, transforming, and sampling 3D assets, so models can be trained end-to-end from 2D supervision or partial data. Its modular design allows easy extension—components like differentiable rasterizers, mesh blending, or signed distance field (SDF) modules can be swapped or combined to test new architectures quickly.
    Downloads: 1 This Week
    Last Update:
    See Project
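
    For the PyTorch3D entry above: a small sketch of its reusable-component style, building a mesh, sampling points from it, and computing a differentiable loss so gradients flow back to the vertices. The tensors are random stand-ins for real data.

      import torch
      from pytorch3d.structures import Meshes
      from pytorch3d.ops import sample_points_from_meshes
      from pytorch3d.loss import chamfer_distance

      verts = torch.rand(100, 3, requires_grad=True)   # toy vertex positions
      faces = torch.randint(0, 100, (200, 3))          # toy triangle indices
      mesh = Meshes(verts=[verts], faces=[faces])

      pred_points = sample_points_from_meshes(mesh, num_samples=500)
      target_points = torch.rand(1, 500, 3)            # stand-in for ground truth

      # chamfer_distance is differentiable, which is how meshes can be supervised
      # from point clouds or other partial observations.
      loss, _ = chamfer_distance(pred_points, target_points)
      loss.backward()
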
  • 17
    BioNeMo

    BioNeMo Framework: For building and adapting AI models

    BioNeMo is an AI-powered framework developed by NVIDIA for protein and molecular generation using deep learning models. It provides researchers and developers with tools to design, analyze, and optimize biological molecules, aiding in drug discovery and synthetic biology applications.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    tsai

    Time series deep learning and machine learning with PyTorch and fastai

    ...If you require a dependency that is not installed, tsai will ask you to install it when necessary.) We've also added a new PredictionDynamics callback that displays the model's predictions during training, for example in a classification task. A new tutorial notebook shows how to train your model on larger-than-memory datasets in less time while achieving up to 100% GPU usage, and another shows how to track your experiments with Weights & Biases.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    MLPerf

    Reference implementations of MLPerf™ training benchmarks

    This is a repository of reference implementations for the MLPerf training benchmarks. These implementations are valid as starting points for benchmark implementations but are not fully optimized and are not intended to be used for "real" performance measurements of software frameworks or hardware. Benchmarking the performance of training ML models on a wide variety of use cases, software, and hardware drives AI performance across the tech industry. The MLPerf Training working group draws on...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    LuxTTS

    A high-quality rapid TTS voice cloning model

    ...Intended for developers, hobbyists, and creators, the repository includes installation instructions, usage examples, and Python APIs that make it feasible to integrate the model in local workflows, web demos, or production systems. Its design emphasizes efficiency and practicality, fitting within modest GPU memory footprints.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 21
    Tunix

    A JAX-native LLM Post-Training Library

    Tunix is a JAX-native library for post-training large language models, bringing supervised fine-tuning, reinforcement learning–based alignment, and knowledge distillation into one coherent toolkit. It embraces JAX’s strengths—functional programming, jit compilation, and effortless multi-device execution—so experiments scale from a single GPU to pods of TPUs with minimal code changes. The library is organized around modular pipelines for data loading, rollout, optimization, and evaluation,...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 22
    DeepEP

    DeepEP: an efficient expert-parallel communication library

    DeepEP is a communication library designed specifically to support Mixture-of-Experts (MoE) and expert parallelism (EP) deployments. Its core role is to implement high-throughput, low-latency all-to-all GPU communication kernels, which handle the dispatching of tokens to different experts (or shards) and then combining expert outputs back into the main data flow. Because MoE architectures require routing inputs to different experts, communication overhead can become a bottleneck — DeepEP...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    Real-ESRGAN GUI

    Cross-platform GUI for image upscaler Real-ESRGAN

    ...Real-ESRGAN itself can only enlarge the input image by a fixed 2-4x magnification (depending on the selected model); arbitrary output sizes are achieved by downsampling with a conventional scaling algorithm after multiple calls to Real-ESRGAN. For GIFs, each frame is split out and its duration recorded, the frames are upscaled one by one, and the result is merged back together. Dragging an image file or directory anywhere onto the window automatically sets its path as the input. (A conceptual sketch of the arbitrary-scale strategy follows this entry.)
    Downloads: 135 This Week
    Last Update:
    See Project
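
    For the Real-ESRGAN GUI entry above: a conceptual sketch of the arbitrary-scale strategy just described (not the GUI's actual code). A fixed-ratio upscaler is applied repeatedly until the image is at least as large as the target, then the overshoot is removed with a conventional resampling filter; upscale_4x is a hypothetical stand-in for one Real-ESRGAN 4x pass.

      from PIL import Image

      def upscale_to(img: Image.Image, target_scale: float, upscale_4x) -> Image.Image:
          target_w = round(img.width * target_scale)
          target_h = round(img.height * target_scale)
          out = img
          while out.width < target_w or out.height < target_h:
              out = upscale_4x(out)  # fixed 4x super-resolution pass
          # Overshot the requested size: shrink back with a conventional algorithm.
          return out.resize((target_w, target_h), Image.LANCZOS)
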
  • 24
    Segmentation Models

    Segmentation models with pretrained backbones. PyTorch

    ...Preparing your data the same way as during the weights' pre-training may give you better results (a higher metric score and faster convergence); this is not necessary if you train the whole model rather than only the decoder. PyTorch Image Models (a.k.a. timm) offers many pretrained models and an interface that allows using them as encoders in smp, though not all models are supported. The input channels parameter (in_channels) lets you create models that process tensors with an arbitrary number of channels. (A short usage sketch follows this entry.)
    Downloads: 0 This Week
    Last Update:
    See Project
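
    For the Segmentation Models entry above: a short sketch of the in_channels and encoder options it mentions, using the segmentation_models_pytorch quickstart API; the encoder choice and class count are illustrative.

      import torch
      import segmentation_models_pytorch as smp

      model = smp.Unet(
          encoder_name="resnet34",     # pretrained backbone used as the encoder
          encoder_weights="imagenet",  # encoder pre-training weights
          in_channels=1,               # e.g. single-channel (grayscale) input
          classes=3,                   # number of output segmentation classes
      )

      x = torch.randn(2, 1, 256, 256)  # batch of grayscale images
      mask_logits = model(x)           # -> shape (2, 3, 256, 256)
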
  • 25
    NeuralForecast

    Scalable and user-friendly neural forecasting algorithms.

    NeuralForecast offers a large collection of neural forecasting models focusing on their performance, usability, and robustness. The models range from classic networks like RNNs to the latest transformers: MLP, LSTM, GRU, RNN, TCN, TimesNet, BiTCN, DeepAR, NBEATS, NBEATSx, NHITS, TiDE, DeepNPTS, TSMixer, TSMixerx, MLPMultivariate, DLinear, NLinear, TFT, Informer, AutoFormer, FedFormer, PatchTST, iTransformer, StemGNN, and TimeLLM. There is a shared belief in Neural forecasting methods'...
    Downloads: 4 This Week
    Last Update:
    See Project
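
    For the NeuralForecast entry above: a minimal sketch of the library's fit/predict workflow, assuming a long-format DataFrame with unique_id, ds, and y columns; the NHITS model, horizon, and file name are illustrative.

      import pandas as pd
      from neuralforecast import NeuralForecast
      from neuralforecast.models import NHITS

      df = pd.read_csv("series.csv")  # hypothetical long-format data

      nf = NeuralForecast(
          models=[NHITS(h=14, input_size=28, max_steps=100)],  # 14-step horizon
          freq="D",                                            # daily series
      )
      nf.fit(df)
      forecasts = nf.predict()  # forecasts for every series in df
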