Search Results for "weight training workout"

16 projects for "weight training workout" with 1 filter applied:

  • 1
    FitTrackee

    Self-hosted outdoor activity tracker

    ...Instead of locking users into proprietary ecosystems or paid plans, FitTrackee lets you keep your own data on your server, giving full control over privacy and longevity of your fitness history. The interface is designed to be flexible enough for everyday gym routines, home workouts, and personalized training plans, supporting a variety of exercise types and custom attributes for different training styles.
    Downloads: 5 This Week
  • 2
    DeepSeek V2

    Strong, Economical, and Efficient Mixture-of-Experts Language Model

    DeepSeek-V2 is the second major iteration of DeepSeek’s foundation language model (LLM) series. This version likely includes architectural improvements, training enhancements, and expanded dataset coverage compared to V1. The repository includes model weight artifacts, evaluation benchmarks across a broad suite (e.g. reasoning, math, multilingual), configuration files, and possibly tokenization / inference scripts. The V2 model is expected to support more advanced features like better context window handling, more efficient inference, better performance on challenging tasks, and stronger alignment with human feedback. ...
    Downloads: 19 This Week
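    Since the entry describes downloadable weight artifacts, a minimal sketch of loading such a checkpoint with Hugging Face transformers follows. The repo id "deepseek-ai/DeepSeek-V2" and the trust_remote_code requirement are assumptions, not details confirmed by this listing.

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "deepseek-ai/DeepSeek-V2"  # assumed checkpoint name
      tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
      model = AutoModelForCausalLM.from_pretrained(
          model_id,
          torch_dtype=torch.bfloat16,  # MoE weights are large; load in half precision
          device_map="auto",           # shard across available GPUs
          trust_remote_code=True,      # custom MoE architecture code ships with the repo
      )
      prompt = tokenizer("Explain mixture-of-experts routing briefly.", return_tensors="pt").to(model.device)
      output = model.generate(**prompt, max_new_tokens=64)
      print(tokenizer.decode(output[0], skip_special_tokens=True))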
  • 3
    MiniMax-M1

    Open-weight, large-scale hybrid-attention reasoning model

    MiniMax-M1 is presented as the world’s first open-weight, large-scale hybrid-attention reasoning model, designed to push the frontier of long-context, tool-using, and deeply “thinking” language models. It is built on the MiniMax-Text-01 foundation and keeps the same massive parameter budget, but reworks the attention and training setup for better reasoning and test-time compute scaling.
    Downloads: 0 This Week
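    As an illustration of the hybrid-attention idea above, here is a toy sketch in which most layers use an O(n) linear attention and every k-th layer uses full softmax attention. The interleaving ratio and this particular linear-attention form are assumptions for illustration, not MiniMax-M1's actual design.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class LinearAttention(nn.Module):
          """O(n) attention: aggregate over positions before mixing with queries."""
          def __init__(self, dim):
              super().__init__()
              self.qkv = nn.Linear(dim, 3 * dim)
              self.out = nn.Linear(dim, dim)

          def forward(self, x):
              q, k, v = self.qkv(x).chunk(3, dim=-1)
              q, k = F.elu(q) + 1, F.elu(k) + 1            # positive feature maps
              kv = torch.einsum("bnd,bne->bde", k, v)      # sum over positions first
              z = 1 / (torch.einsum("bnd,bd->bn", q, k.sum(1)) + 1e-6)
              return self.out(torch.einsum("bnd,bde,bn->bne", q, kv, z))

      class HybridStack(nn.Module):
          def __init__(self, dim, depth, softmax_every=8, heads=8):
              super().__init__()
              self.layers = nn.ModuleList([
                  nn.MultiheadAttention(dim, heads, batch_first=True)
                  if (i + 1) % softmax_every == 0
                  else LinearAttention(dim)
                  for i in range(depth)
              ])

          def forward(self, x):
              for layer in self.layers:
                  if isinstance(layer, nn.MultiheadAttention):
                      x = x + layer(x, x, x, need_weights=False)[0]  # full softmax layer
                  else:
                      x = x + layer(x)                               # cheap linear layer
              return x

      x = torch.randn(2, 128, 64)
      print(HybridStack(dim=64, depth=16, softmax_every=8)(x).shape)  # (2, 128, 64)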
  • 4
    MatMul-Free LM

    Implementation for MatMul-free LM

    MatMul-Free LM is an experimental implementation of a large language model architecture designed to eliminate traditional matrix multiplication operations used in transformer networks. Since matrix multiplication is one of the most computationally expensive components of modern language models, the project explores alternative computational strategies that reduce hardware requirements while maintaining comparable performance. The architecture relies on quantization-aware training and...
    Downloads: 0 This Week
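    One common way such "matmul-free" layers are realized is with ternary weights in {-1, 0, +1}, so the product against activations reduces to additions and subtractions. The sketch below uses a generic threshold rule and straight-through estimator for quantization-aware training; it is not necessarily this project's exact scheme.

      import torch
      import torch.nn as nn

      class TernaryLinear(nn.Module):
          def __init__(self, in_features, out_features):
              super().__init__()
              self.weight = nn.Parameter(torch.empty(out_features, in_features))
              nn.init.kaiming_uniform_(self.weight)

          def forward(self, x):
              w = self.weight
              scale = w.abs().mean()                          # per-tensor scale
              w_t = torch.sign(w) * (w.abs() > 0.5 * scale)   # ternarize to {-1, 0, +1}
              # Straight-through estimator: forward uses w_t, backward sees w.
              w_q = w + (w_t * scale - w).detach()
              return x @ w_q.t()  # with {-1, 0, +1} weights this is add/sub only

      layer = TernaryLinear(64, 32)
      out = layer(torch.randn(4, 64))
      out.sum().backward()  # gradients flow to the latent full-precision weights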
  • 5
    BitNet

    BitNet: Scaling 1-bit Transformers for Large Language Models

    BitNet is a machine learning research implementation that explores extremely low-precision neural network architectures designed to dramatically reduce the computational cost of large language models. The project implements the BitNet architecture described in research on scaling transformer models using extremely low-bit quantization techniques. In this approach, neural network weights are quantized to approximately one bit per parameter, allowing models to operate with far lower memory...
    Downloads: 5 This Week
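    A sketch of a BitLinear-style layer in the spirit of the cited research: weights binarized to ±1 with a mean-magnitude scale, activations quantized to 8 bits with absmax scaling. Normalization and group-wise scaling details are simplified away, so treat this as an approximation of the idea rather than the project's implementation.

      import torch
      import torch.nn as nn

      def ste(quantized, latent):
          """Straight-through estimator: forward quantized, backward identity."""
          return latent + (quantized - latent).detach()

      class BitLinear(nn.Module):
          def __init__(self, in_features, out_features, a_bits=8):
              super().__init__()
              self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
              self.qmax = 2 ** (a_bits - 1) - 1

          def forward(self, x):
              w = self.weight
              alpha = w.abs().mean()                 # scale recovered after binarization
              w_bin = ste(torch.sign(w), w)          # ±1 weights, ~1 bit per parameter
              s = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-5) / self.qmax
              x_q = ste((x / s).round().clamp(-self.qmax, self.qmax), x / s)
              return (x_q @ w_bin.t()) * s * alpha   # rescale the low-bit product

      out = BitLinear(64, 32)(torch.randn(4, 64))
      print(out.shape)  # torch.Size([4, 32])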
  • 6
    Attention Residuals (AttnRes)

    Drop-in replacement for standard residual connections in Transformers

    Attention Residuals is a research-driven architectural innovation for transformer-based models that replaces traditional residual connections with an attention-based mechanism to improve information flow across layers. In standard transformers, residual connections simply sum outputs from previous layers, which can lead to uncontrolled growth of hidden states and dilution of early-layer information in deep networks. Attention Residuals introduces a learnable softmax attention mechanism that...
    Downloads: 0 This Week
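    A toy sketch of the mechanism described above: instead of summing the previous hidden state with the block output, each layer forms a learned softmax-weighted (convex) combination of all earlier layer outputs, which keeps hidden-state magnitudes bounded. The one-logit-per-history-entry parameterization is an assumption for illustration.

      import torch
      import torch.nn as nn

      class SoftmaxResidualBlock(nn.Module):
          def __init__(self, dim, history_len):
              super().__init__()
              self.fn = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim),
                                      nn.GELU(), nn.Linear(dim, dim))
              self.logits = nn.Parameter(torch.zeros(history_len + 1))  # history + new output

          def forward(self, history):
              """history: list of tensors from all earlier layers, each [B, N, D]."""
              out = self.fn(history[-1])
              stack = torch.stack(history + [out], dim=0)          # [L+1, B, N, D]
              w = torch.softmax(self.logits[: stack.size(0)], 0)   # convex combination
              return torch.einsum("l,lbnd->bnd", w, stack)         # bounded mixing

      blocks = [SoftmaxResidualBlock(64, history_len=i + 1) for i in range(4)]
      history = [torch.randn(2, 16, 64)]
      for blk in blocks:
          history.append(blk(history))
      print(history[-1].shape)  # torch.Size([2, 16, 64])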
  • 7
    LLM-Pruner

    On the Structural Pruning of Large Language Models

    ...The framework relies on gradient-based analysis to determine which parameters contribute least to model performance, enabling targeted structural pruning rather than simple weight removal. After pruning, the framework applies lightweight fine-tuning methods such as LoRA to recover performance using relatively small datasets and short training times.
    Downloads: 0 This Week
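    A compact sketch of gradient-based structural importance in the spirit described above: first-order Taylor scores |w · dL/dw| aggregated per output channel, with the weakest channels removed. LLM-Pruner's actual criterion and its handling of coupled structures are more involved; this is a deliberately reduced illustration.

      import torch
      import torch.nn as nn

      torch.manual_seed(0)
      layer = nn.Linear(64, 32)
      x, target = torch.randn(128, 64), torch.randn(128, 32)

      loss = nn.functional.mse_loss(layer(x), target)  # small calibration batch
      loss.backward()

      # Per-output-channel importance: sum of |w * grad| over incoming weights.
      score = (layer.weight * layer.weight.grad).abs().sum(dim=1)
      keep = score.argsort(descending=True)[:24]       # drop the 8 weakest channels

      pruned = nn.Linear(64, 24)
      with torch.no_grad():
          pruned.weight.copy_(layer.weight[keep])
          pruned.bias.copy_(layer.bias[keep])
      # A LoRA-style light fine-tune would then recover the lost accuracy.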
  • 8
    UCCL

    UCCL is an efficient communication library for GPUs

    ...The system also supports specialized workloads such as reinforcement learning weight transfers, key-value cache sharing, and expert parallelism for mixture-of-experts models. Its architecture emphasizes flexibility and extensibility so that developers can implement custom communication protocols tailored to specific machine learning workloads.
    Downloads: 0 This Week
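    As a sketch of the reinforcement-learning weight-transfer workload mentioned above, the following uses plain torch.distributed as a stand-in for a trainer rank broadcasting fresh policy weights to inference ranks; UCCL's own API is not shown in this listing.

      import torch
      import torch.distributed as dist

      def sync_policy_weights(model: torch.nn.Module, src_rank: int = 0):
          """Broadcast every parameter tensor from the trainer to all other ranks."""
          for p in model.parameters():
              dist.broadcast(p.data, src=src_rank)

      # Typical usage inside each process of a distributed job:
      # dist.init_process_group("nccl")
      # sync_policy_weights(policy_model, src_rank=0)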
  • 9
    fvcore

    Collection of common code shared among different research projects

    fvcore is a lightweight utility library that factors out common performance-minded components used across Facebook/Meta computer-vision codebases. It provides numerics and loss layers (e.g., focal loss, smooth-L1, IoU/GIoU) implemented for speed and clarity, along with initialization helpers and normalization layers for building PyTorch models. Its common modules include timers, logging, checkpoints, registry patterns, and configuration helpers that reduce boilerplate in research code. A...
    Downloads: 0 This Week
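    A short usage sketch touching two pieces fvcore is known for, the focal-loss helper and FLOP counting; exact signatures may vary between fvcore versions.

      import torch
      from fvcore.nn import FlopCountAnalysis, sigmoid_focal_loss

      # Focal loss for a batch of binary logits.
      logits = torch.randn(8, 5)
      targets = torch.randint(0, 2, (8, 5)).float()
      loss = sigmoid_focal_loss(logits, targets, alpha=0.25, gamma=2.0, reduction="mean")

      # FLOP counting for an arbitrary module.
      model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU(),
                                  torch.nn.Linear(64, 10))
      flops = FlopCountAnalysis(model, torch.randn(1, 64))
      print(loss.item(), flops.total())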
  • 10
    DE-HEoC

    DE-based Weight Optimisation for Heterogeneous Ensemble

    We propose the use of the Differential Evolution algorithm for adjusting the weights of the base classifiers in a weighted-voting heterogeneous ensemble. The average Matthews Correlation Coefficient (MCC) score over 10-fold cross-validation on the training dataset is used as the measure of ensemble quality, and the DE/rand/1/bin algorithm is used to maximize it.
    Downloads: 0 This Week
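    A compact sketch of DE/rand/1/bin tuning voting weights to maximize MCC, as described above. For brevity the 10-fold cross-validated score is collapsed to a single validation split, and the three base classifiers are illustrative choices.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import matthews_corrcoef
      from sklearn.model_selection import train_test_split
      from sklearn.naive_bayes import GaussianNB

      X, y = make_classification(n_samples=600, random_state=0)
      Xtr, Xval, ytr, yval = train_test_split(X, y, random_state=0)
      probs = np.stack([m.fit(Xtr, ytr).predict_proba(Xval)[:, 1]
                        for m in (LogisticRegression(max_iter=1000),
                                  RandomForestClassifier(random_state=0),
                                  GaussianNB())])

      def mcc(w):
          votes = w @ probs / (w.sum() + 1e-12)       # weighted soft vote
          return matthews_corrcoef(yval, (votes > 0.5).astype(int))

      rng = np.random.default_rng(0)
      pop = rng.random((20, 3))                       # 20 weight vectors, 3 classifiers
      F, CR = 0.8, 0.9
      for _ in range(50):
          for i in range(len(pop)):
              a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
              mutant = np.clip(a + F * (b - c), 0, 1)  # DE/rand/1 mutation
              cross = rng.random(3) < CR               # binomial crossover
              cross[rng.integers(3)] = True            # keep at least one mutant gene
              trial = np.where(cross, mutant, pop[i])
              if mcc(trial) >= mcc(pop[i]):            # greedy selection
                  pop[i] = trial
      best = max(pop, key=mcc)
      print(best / best.sum(), mcc(best))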
  • 11
    The project's purpose is to develop an application to store and analyze workout data: the duration of a training session, the average pulse during the session, and so forth. The focus will be on endurance training.
    Downloads: 0 This Week
  • 12
    NeuroBox is a .NET OOP library for generating, propagating, and training complex neural networks with techniques such as backpropagation with weight decay, a momentum term, Manhattan training, and flat-spot elimination.
    Downloads: 0 This Week
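    The update rules named above, in assumed textbook form: gradient descent with weight decay and a momentum term, plus the Manhattan rule, which steps by a fixed amount opposite the sign of the gradient.

      # Scalar sketches of the training rules; forms are assumptions, not NeuroBox's API.
      def sgd_step(w, grad, velocity, lr=0.01, momentum=0.9, weight_decay=1e-4):
          v = momentum * velocity - lr * (grad + weight_decay * w)  # momentum + decay
          return w + v, v

      def manhattan_step(w, grad, step=0.001):
          return w - step * (1 if grad > 0 else -1 if grad < 0 else 0)  # sign only

      w, v = 0.5, 0.0
      w, v = sgd_step(w, grad=0.2, velocity=v)
      w = manhattan_step(w, grad=0.2)
      print(w, v)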
  • 13
    A cross-platform (eventually) application to record a triathlete's workouts, meals/weight, equipment, performance, events, targets, etc. It can record running, swimming, and cycling, and will be extended to record strength training in the future. Requires
    Downloads: 0 This Week
  • 14
    Dream Box LMS is a lightweight learning management system aimed at the training departments of small to mid-size corporations. DBLMS is developed in Java and adheres to the MVC style of programming.
    Downloads: 0 This Week
  • 15
    Qwen3-Next

    Qwen3-Next: 80B instruct LLM with ultra-long context up to 1M tokens

    ...The model natively supports a context length of 262K tokens and can be extended up to 1 million tokens using RoPE scaling (YaRN), making it highly capable for processing large documents and extended conversations. Multi-Token Prediction (MTP) boosts both training and inference, while stability optimizations such as weight-decayed and zero-centered layernorm ensure robustness. Benchmarks show it performs comparably to larger models like Qwen3-235B on reasoning, coding, multilingual, and alignment tasks while requiring only a fraction of the training cost.
    Downloads: 0 This Week
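    A sketch of the zero-centered, weight-decayed norm trick mentioned above: the norm gain is parameterized as (1 + g) with g initialized at zero, so ordinary weight decay pulls g toward 0, i.e. the effective gain toward 1 rather than toward 0. Qwen3-Next's exact norm variant may differ in detail.

      import torch
      import torch.nn as nn

      class ZeroCenteredRMSNorm(nn.Module):
          def __init__(self, dim, eps=1e-6):
              super().__init__()
              self.g = nn.Parameter(torch.zeros(dim))  # decays toward 0 => gain toward 1
              self.eps = eps

          def forward(self, x):
              rms = x.pow(2).mean(-1, keepdim=True).add(self.eps).rsqrt()
              return x * rms * (1.0 + self.g)

      norm = ZeroCenteredRMSNorm(64)
      print(norm(torch.randn(2, 16, 64)).shape)  # torch.Size([2, 16, 64])
      # Weight decay applied to norm.g as usual now regularizes toward unit gain.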
  • 16
    Mistral Large 3 675B Base 2512

    Frontier-scale 675B multimodal base model for custom AI training

    Mistral Large 3 675B Base 2512 is the foundational, pre-trained version of the Mistral Large 3 family, built as a frontier-scale multimodal Mixture-of-Experts model with 41B active parameters and a total size of 675B. It is trained from scratch using 3000 H200 GPUs, making it one of the most advanced and compute-intensive open-weight models available. As the base version, it is not fine-tuned for instruction following or reasoning, making it ideal for teams planning their own domain-specific finetuning or custom training pipelines. The model is engineered for reliability, long-context comprehension, and stable performance across many enterprise, scientific, and knowledge-intensive workloads. ...
    Downloads: 0 This Week