Showing 18 open source projects for "weight training workout"

  • 1
    DeepSeek V2

    Strong, Economical, and Efficient Mixture-of-Experts Language Model

    DeepSeek-V2 is the second major iteration of DeepSeek's series of foundation large language models (LLMs). This version likely includes architectural improvements, training enhancements, and expanded dataset coverage compared to V1. The repository includes model weight artifacts, evaluation benchmarks across a broad suite (e.g. reasoning, math, multilingual), configuration files, and possibly tokenization / inference scripts. The V2 model is expected to support more advanced features such as better context-window handling, more efficient inference, stronger performance on challenging tasks, and closer alignment with human feedback. ...
    Downloads: 19 This Week
    See Project
  • 2
    MiniMax-M1

    Open-weight, large-scale hybrid-attention reasoning model

    MiniMax-M1 is presented as the world’s first open-weight, large-scale hybrid-attention reasoning model, designed to push the frontier of long-context, tool-using, and deeply “thinking” language models. It is built on the MiniMax-Text-01 foundation and keeps the same massive parameter budget, but reworks the attention and training setup for better reasoning and test-time compute scaling.
    Downloads: 0 This Week
    See Project
  • 3
    Bootstrap Your Own Latent (BYOL)

    Usable Implementation of "Bootstrap Your Own Latent" self-supervised

    ...This repository offers a module with which one can easily wrap any image-based neural network (residual network, discriminator, policy network) to immediately start benefiting from unlabelled image data. There is now evidence that batch normalization is key to making this technique work well. A newer paper successfully replaced batch norm with group norm plus weight standardization, refuting the claim that batch statistics are needed for BYOL to work. Simply plug in your neural network, specifying (1) the image dimensions and (2) the name (or index) of the hidden layer whose output is used as the latent representation for self-supervised training.
    Downloads: 3 This Week
    See Project
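BYOL-style training keeps a target network that is never updated by gradients; instead it tracks an exponential moving average (EMA) of the online network's weights. A minimal, stdlib-only sketch of that update rule (names are illustrative, not the repository's API):

```python
def ema_update(target, online, tau=0.99):
    """Move each target weight a small step toward its online counterpart.

    target, online: flat lists of floats (stand-ins for model weights).
    tau: decay rate; a higher tau means a slower-moving target network.
    """
    return [tau * t + (1.0 - tau) * o for t, o in zip(target, online)]

# After one update the target has drifted 1% of the way to the online weights.
target = [0.0, 0.0]
online = [1.0, 2.0]
target = ema_update(target, online, tau=0.99)
print(target)  # roughly [0.01, 0.02]
```

In the real library the same step is applied to every parameter tensor of the wrapped network after each optimizer step.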
  • 4
    MatMul-Free LM

    Implementation for MatMul-free LM

    MatMul-Free LM is an experimental implementation of a large language model architecture designed to eliminate traditional matrix multiplication operations used in transformer networks. Since matrix multiplication is one of the most computationally expensive components of modern language models, the project explores alternative computational strategies that reduce hardware requirements while maintaining comparable performance. The architecture relies on quantization-aware training and...
    Downloads: 0 This Week
    See Project
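One way such architectures avoid matrix multiplication is to constrain weights to {-1, 0, +1}, so every multiply-accumulate degenerates into an add, a subtract, or a skip. A hedged stdlib sketch of that idea (not the repository's actual kernels):

```python
def ternary_matvec(W, x):
    """Matrix-vector product where W holds only -1, 0, or +1.

    Each "multiply" collapses into an add, a subtract, or nothing at all,
    which is the arithmetic simplification MatMul-free designs exploit.
    """
    out = []
    for row in W:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi
            elif w == -1:
                acc -= xi
        out.append(acc)
    return out

W = [[1, -1, 0], [0, 1, 1]]
x = [2.0, 3.0, 5.0]
y = ternary_matvec(W, x)
print(y)  # [-1.0, 8.0]
```

Quantization-aware training keeps full-precision shadow weights during backprop and snaps them to ternary values for the forward pass.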
  • 5
    CogVideo

    Text and image to video generation: CogVideoX and CogVideo

    CogVideo is an open-source family of advanced video generation models that can create videos from text, images, or existing video inputs. Built on large-scale Transformer and diffusion architectures, it enables multimodal generation across text-to-video, image-to-video, and video continuation tasks. The latest CogVideoX models offer higher resolution outputs, longer video durations, and improved controllability through prompt engineering. The project includes tools for inference,...
    Downloads: 21 This Week
    See Project
  • 6
    BitNet

    BitNet: Scaling 1-bit Transformers for Large Language Models

    BitNet is a machine learning research implementation that explores extremely low-precision neural network architectures designed to dramatically reduce the computational cost of large language models. The project implements the BitNet architecture described in research on scaling transformer models using extremely low-bit quantization techniques. In this approach, neural network weights are quantized to approximately one bit per parameter, allowing models to operate with far lower memory...
    Downloads: 5 This Week
    See Project
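One common 1-bit scheme, in the spirit of the BitNet line of work, binarizes each weight to its sign and keeps a single per-tensor scale (the mean absolute value). A stdlib sketch of the idea, not the project's implementation:

```python
def binarize(weights):
    """Quantize a weight tensor (here a flat list) to one bit per value:
    keep only the sign, plus one shared scale so magnitudes are preserved
    on average."""
    alpha = sum(abs(w) for w in weights) / len(weights)
    signs = [1 if w >= 0 else -1 for w in weights]
    return alpha, signs

def dequantize(alpha, signs):
    """Reconstruct approximate weights from the sign bits and the scale."""
    return [alpha * s for s in signs]

alpha, signs = binarize([0.4, -0.2, 0.1, -0.5])
print(alpha, signs)  # alpha is roughly 0.3; signs are [1, -1, 1, -1]
print(dequantize(alpha, signs))
```

Storing one sign bit plus a shared scale instead of a 16-bit float per weight is where the memory savings described above come from.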
  • 7
    Attention Residuals (AttnRes)

    Drop-in replacement for standard residual connections in Transformers

    Attention Residuals is a research-driven architectural innovation for transformer-based models that replaces traditional residual connections with an attention-based mechanism to improve information flow across layers. In standard transformers, residual connections simply sum outputs from previous layers, which can lead to uncontrolled growth of hidden states and dilution of early-layer information in deep networks. Attention Residuals introduces a learnable softmax attention mechanism that...
    Downloads: 0 This Week
    See Project
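The mechanism described above can be sketched as a learnable softmax over earlier layer outputs, replacing the plain sum of a standard residual connection. A stdlib illustration with hypothetical per-layer logits (not the repository's API):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_residual(layer_outputs, logits):
    """Weighted combination of all earlier layer outputs.

    layer_outputs: list of hidden-state vectors, one per earlier layer.
    logits: learnable scores, one per layer (illustrative names).
    A plain residual stream would simply sum the vectors; softmax weights
    let the model control how much each layer contributes.
    """
    weights = softmax(logits)
    dim = len(layer_outputs[0])
    return [sum(w * h[i] for w, h in zip(weights, layer_outputs))
            for i in range(dim)]

h = attention_residual([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
print(h)  # equal logits -> equal mixing -> [0.5, 0.5]
```

Because the weights are normalized, the combined hidden state cannot grow without bound the way repeated summation can.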
  • 8
    LLM-Pruner

    On the Structural Pruning of Large Language Models

    ...The framework relies on gradient-based analysis to determine which parameters contribute least to model performance, enabling targeted structural pruning rather than simple weight removal. After pruning, the framework applies lightweight fine-tuning methods such as LoRA to recover performance using relatively small datasets and short training times.
    Downloads: 0 This Week
    See Project
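Gradient-based importance of the kind described above is often approximated with a first-order Taylor score, |w · ∂L/∂w|, summed over a structural group such as an attention head. A hedged sketch of ranking groups for removal (illustrative names, not LLM-Pruner's API):

```python
def group_importance(weights, grads):
    """First-order saliency of one structural group (e.g. an attention head):
    the sum of |w * dL/dw| over the group's parameters."""
    return sum(abs(w * g) for w, g in zip(weights, grads))

def rank_groups(groups):
    """groups: {name: (weights, grads)}.
    Returns group names sorted least-important first, i.e. pruning order."""
    scores = {name: group_importance(w, g) for name, (w, g) in groups.items()}
    return sorted(scores, key=scores.get)

order = rank_groups({
    "head_0": ([0.5, -0.2], [0.1, 0.3]),   # score 0.05 + 0.06 = 0.11
    "head_1": ([0.01, 0.02], [0.2, 0.1]),  # score 0.002 + 0.002 = 0.004
})
print(order)  # head_1 is the first pruning candidate
```

Pruning whole groups rather than individual weights is what keeps the resulting model dense and fast on ordinary hardware.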
  • 9
    UCCL

    UCCL is an efficient communication library for GPUs

    ...The system also supports specialized workloads such as reinforcement learning weight transfers, key-value cache sharing, and expert parallelism for mixture-of-experts models. Its architecture emphasizes flexibility and extensibility so that developers can implement custom communication protocols tailored to specific machine learning workloads.
    Downloads: 0 This Week
    See Project
  • 10
    FastChat

    Open platform for training, serving, and evaluating language models

    FastChat is an open platform for training, serving, and evaluating large language model-based chatbots. If you do not have enough memory, you can enable 8-bit compression by adding --load-8bit to the serving commands. This can reduce memory usage by around half with slightly degraded model quality, and it is compatible with CPU, GPU, and Metal backends. Vicuna-13B with 8-bit compression can run on a single NVIDIA 3090/4080/T4/V100 (16 GB) GPU. In addition, you can add --cpu-offloading to...
    Downloads: 0 This Week
    See Project
  • 11
    Bert-VITS2

    VITS2 backbone with multilingual-bert

    Bert-VITS2 is a neural text-to-speech project that combines a VITS2 backbone with a multilingual BERT front-end to produce high-quality speech in multiple languages. The core idea is to use BERT-style contextual embeddings for text encoding while relying on a refined VITS2 architecture for acoustic generation and vocoding. The repository includes everything needed to train, fine-tune, and run the model, from configuration files to preprocessing scripts, spectrogram utilities, and training...
    Downloads: 3 This Week
    See Project
  • 12
    TensorFlow Object Counting API

    The TensorFlow Object Counting API is an open source framework

    ...Please get in touch if you need a professional object detection, tracking, and counting project with very high accuracy and reliability! You can train TensorFlow models with your own training data to build your own custom object-counting system! If you want to learn how, please check one of the sample projects, which cover some of the theory of transfer learning and show how to apply it in useful projects. Development is in progress! The API will be updated soon; a more capable and lightweight API will be available in this repo! ...
    Downloads: 0 This Week
    See Project
  • 13
    kcws

    Deep Learning Chinese Word Segment

    Deep-learning Chinese word segmentation. Install the Bazel build tool and TensorFlow (this project currently requires TF 1.0.0-alpha or above). Switch to the project's code directory and run ./configure, then compile the background service. Follow the project's WeChat public account and reply "kcws" to get the corpus download address. Extract the corpus to a directory, then switch to the kcws code...
    Downloads: 0 This Week
    See Project
  • 14

    DE-HEoC

    DE-based Weight Optimisation for Heterogeneous Ensemble

    We propose using the Differential Evolution algorithm to adjust the weights of the base classifiers in a weighted-voting heterogeneous ensemble of classifiers. The average Matthews Correlation Coefficient (MCC) score over 10-fold cross-validation on the training dataset serves as the measure of ensemble quality, and the DE/rand/1/bin algorithm is used to maximize it.
    Downloads: 0 This Week
    See Project
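The DE/rand/1/bin operator named above mutates each target vector using three distinct random peers and then applies binomial crossover. A stdlib sketch of one generation of trial weight vectors (selection against the MCC objective is omitted):

```python
import random

def de_rand_1_bin(pop, F=0.5, CR=0.9, seed=0):
    """One DE/rand/1/bin generation step.

    For each target vector i, pick three distinct other vectors r1, r2, r3,
    build the mutant r1 + F * (r2 - r3), then binomially cross it over with
    the target. jrand guarantees at least one gene comes from the mutant.
    """
    rng = random.Random(seed)
    n, dim = len(pop), len(pop[0])
    trials = []
    for i in range(n):
        r1, r2, r3 = rng.sample([j for j in range(n) if j != i], 3)
        mutant = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
        jrand = rng.randrange(dim)
        trial = [mutant[d] if (rng.random() < CR or d == jrand) else pop[i][d]
                 for d in range(dim)]
        trials.append(trial)
    return trials

# Hypothetical population of classifier-weight vectors (one weight per base model).
pop = [[0.2, 0.8], [0.5, 0.5], [0.9, 0.1], [0.3, 0.7]]
print(de_rand_1_bin(pop))
```

In the full algorithm, each trial vector would be scored by the cross-validated MCC of the ensemble it defines and kept only if it beats its target.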
  • 15

    cognity

    A neural network library for Java.

    Cognity is an object-oriented neural network library for Java. Its goal is to provide an easy-to-use, high-level architecture for neural network computations along with reasonable performance.
    Downloads: 0 This Week
    See Project
  • 16
    NeuroBox is a .NET OOP library to generate, propagate, and train complex neural networks with techniques such as backpropagation with weight decay, momentum term, Manhattan training, flat-spot elimination, etc.
    Downloads: 0 This Week
    See Project
  • 17
    Qwen3-Next

    Qwen3-Next: 80B instruct LLM with ultra-long context up to 1M tokens

    ...The model natively supports a context length of 262K tokens and can be extended up to 1 million tokens using RoPE scaling (YaRN), making it highly capable for processing large documents and extended conversations. Multi-Token Prediction (MTP) boosts both training and inference, while stability optimizations such as weight-decayed and zero-centered layernorm ensure robustness. Benchmarks show it performs comparably to larger models like Qwen3-235B on reasoning, coding, multilingual, and alignment tasks while requiring only a fraction of the training cost.
    Downloads: 0 This Week
    See Project
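Zero-centered normalization of the kind mentioned above stores the norm gain as an offset g from 1, so that weight decay pulls the effective scale toward 1 rather than toward 0. A hedged stdlib sketch of the idea (illustrative only, not the model's implementation):

```python
import math

def zero_centered_layernorm(x, g, eps=1e-5):
    """Layer normalization whose gain is parameterized as (1 + g).

    Applying weight decay to g shrinks it toward 0, i.e. the effective
    gain decays toward 1 (the identity scale) instead of collapsing to 0,
    which is the stability property this trick is meant to provide.
    """
    mean = sum(x) / len(x)
    var = sum((xi - mean) ** 2 for xi in x) / len(x)
    inv = 1.0 / math.sqrt(var + eps)
    return [(1.0 + gi) * (xi - mean) * inv for xi, gi in zip(x, g)]

y = zero_centered_layernorm([1.0, 2.0, 3.0], [0.0, 0.0, 0.0])
print(y)  # with g = 0 this reduces to plain layer normalization
```

The same reparameterization applies unchanged to RMSNorm-style layers; only the centering step differs.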
  • 18
    Mistral Large 3 675B Base 2512

    Frontier-scale 675B multimodal base model for custom AI training

    Mistral Large 3 675B Base 2512 is the foundational, pre-trained version of the Mistral Large 3 family, built as a frontier-scale multimodal Mixture-of-Experts model with 41B active parameters and a total size of 675B. It was trained from scratch on 3000 H200 GPUs, making it one of the most advanced and compute-intensive open-weight models available. As the base version, it is not fine-tuned for instruction following or reasoning, making it ideal for teams planning their own domain-specific fine-tuning or custom training pipelines. The model is engineered for reliability, long-context comprehension, and stable performance across many enterprise, scientific, and knowledge-intensive workloads. ...
    Downloads: 0 This Week
    See Project