Search Results for "gpu max performance" - Page 8

Showing 388 open source projects for "gpu max performance"

  • 1
    VK-GL-CTS

    Khronos Vulkan, OpenGL, and OpenGL ES Conformance Tests

    ...These tests are essential for vendors seeking certification, as they rigorously check the correctness and completeness of driver implementations against standardized behavior. The suite contains thousands of automated tests that assess rendering accuracy, API behavior, memory usage, and performance consistency. It is widely used by GPU vendors and developers to ensure compatibility, stability, and reliability across platforms and hardware.
    Downloads: 1 This Week
  • 2
    MSI Kombustor

    Advanced OpenGL and Vulkan graphics card stress testing utility

    MSI Kombustor is a dedicated GPU stress-testing and benchmarking tool built on top of the popular FurMark engine. It is designed to push graphics cards to their thermal and stability limits, helping users verify cooling performance and overclocking reliability. With support for advanced 3D APIs like OpenGL and Vulkan, Kombustor can generate demanding rendering workloads that simulate real-world GPU pressure.
    Downloads: 85 This Week
  • 3
    JAX

    Composable transformations of Python+NumPy programs

    With its updated version of Autograd, JAX can automatically differentiate native Python and NumPy functions. It can differentiate through loops, branches, recursion, and closures, and it can take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation) via grad as well as forward-mode differentiation, and the two can be composed arbitrarily to any order. What’s new is that JAX uses XLA to compile and run your NumPy programs on GPUs and...
    Downloads: 0 This Week
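
    A minimal sketch of the composable transformations described above, using the public jax.grad and jax.jit APIs (the toy function and inputs are illustrative, not taken from the project docs):

        import jax
        import jax.numpy as jnp

        def f(x):
            # simple scalar-valued function of a scalar input
            return jnp.sin(x) * x ** 2

        df = jax.grad(f)             # reverse-mode derivative of f
        ddf = jax.grad(jax.grad(f))  # derivatives of derivatives compose freely
        fast_df = jax.jit(df)        # XLA-compiled; dispatches to GPU/TPU when one is available

        print(df(1.0), ddf(1.0), fast_df(1.0))
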
  • 4
    MusicGPT

    Generate music based on natural language prompts using LLMs

    MusicGPT is an open-source application designed to generate music from natural language prompts using locally executed artificial intelligence models. The software allows users to run advanced music generation systems directly on their own devices without requiring heavy dependencies such as Python or full machine learning frameworks. Instead, it provides a lightweight environment capable of executing music generation models locally on CPUs or GPUs while maintaining strong performance across...
    Downloads: 5 This Week
  • 5
    bitnet.cpp

    Official inference framework for 1-bit LLMs

    bitnet.cpp is the official open-source inference framework and ecosystem designed to enable ultra-efficient execution of 1-bit large language models (LLMs), which quantize most model parameters to ternary values (-1, 0, +1) while maintaining competitive performance with full-precision counterparts. At its core is bitnet.cpp, a highly optimized C++ backend that supports fast, low-memory inference on both CPUs and GPUs, enabling models such as BitNet b1.58 to run without requiring enormous...
    Downloads: 2 This Week
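
    The ternary-weight idea described above can be illustrated independently of bitnet.cpp itself. The sketch below is a generic absmean-style round-and-clip quantizer in NumPy, written only to show the concept; it is not the project's actual kernel or API:

        import numpy as np

        def ternary_quantize(w, eps=1e-5):
            # Scale by the mean absolute value, then round and clip to {-1, 0, +1}.
            scale = np.mean(np.abs(w)) + eps
            q = np.clip(np.round(w / scale), -1, 1).astype(np.int8)
            return q, scale

        w = np.random.randn(4, 4).astype(np.float32)
        q, scale = ternary_quantize(w)
        w_hat = q * scale                      # dequantized approximation used at inference
        print(q)
        print("mean abs error:", np.abs(w - w_hat).mean())
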
  • 6
    Isaac ROS Visual SLAM

    Visual SLAM/odometry package based on NVIDIA-accelerated cuVSLAM

    Discover a faster, easier way to build advanced AI robotics applications with the NVIDIA Isaac™ ROS collection of accelerated computing packages and AI models, bringing NVIDIA acceleration to ROS developers everywhere. Isaac ROS Visual SLAM provides a high-performance, best-in-class ROS 2 package for VSLAM (visual simultaneous localization and mapping). This package uses one or more stereo cameras and optionally an IMU to estimate odometry as an input to navigation. It is GPU-accelerated to provide real-time, low-latency results in a robotics application. VSLAM provides an additional odometry source for mobile robots (ground-based) and can be the primary odometry source for drones. ...
    Downloads: 2 This Week
  • 7
    tt-metal

    TT-NN operator library and TT-Metalium low-level kernel programming model

    tt-metal, also referred to in its documentation as TT-Metalium, is Tenstorrent’s low-level software development kit for programming applications on Tenstorrent AI accelerators. The project is designed for developers who need direct access to the company’s Tensix processor architecture, exposing a programming model that is closer to hardware control than high-level inference frameworks. Instead of following a traditional GPU model centered on massive thread parallelism, the platform is built...
    Downloads: 1 This Week
  • 8
    Diffrax

    Numerical differential equation solvers in JAX

    Diffrax is a numerical differential equation solving library built for the JAX ecosystem, with a strong focus on composability, differentiability, and high-performance scientific computing. The project provides tools for solving ordinary differential equations, stochastic differential equations, controlled differential equations, and related systems in a way that fits naturally into modern machine learning and differentiable programming workflows. Because it is written to work closely with...
    Downloads: 0 This Week
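
    A minimal example of the solver interface, using Diffrax's documented diffeqsolve entry point (the toy dynamics and step size are illustrative):

        import jax.numpy as jnp
        import diffrax

        # dy/dt = -y, integrated from t=0 to t=3 with an explicit Runge-Kutta solver.
        term = diffrax.ODETerm(lambda t, y, args: -y)
        solver = diffrax.Tsit5()
        sol = diffrax.diffeqsolve(
            term, solver, t0=0.0, t1=3.0, dt0=0.1,
            y0=jnp.array(1.0),
            saveat=diffrax.SaveAt(ts=jnp.linspace(0.0, 3.0, 7)),
        )
        print(sol.ts, sol.ys)  # the whole solve is differentiable with jax.grad
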
  • 9
    Recursive Language Models

    General plug-and-play inference library for Recursive Language Models

    RLM (short for Recursive Language Models) is a plug-and-play inference library for recursive language models, an inference strategy in which a language model can decompose a long task or context and recursively invoke itself (or other models) on the pieces rather than processing everything in a single context window. It provides a consistent API that abstracts away many of the repetitive engineering patterns in this kind of inference work, letting developers focus on modeling and experimentation rather than infrastructure plumbing. Within the framework, you can define custom...
    Downloads: 0 This Week
  • 10
    EPLB

    Expert Parallelism Load Balancer

    EPLB is DeepSeek’s open implementation of a load balancing algorithm designed for expert parallelism (EP) settings in MoE architectures. In EP, different “experts” are mapped to different GPUs or nodes, so load imbalance becomes a performance bottleneck if certain experts are invoked much more often. EPLB solves this by duplicating heavily used experts (redundancy) and then placing those duplicates across GPUs to even out computational load. It uses policies like hierarchical load balancing (grouped experts placed at node and then GPU level) and global load balancing depending on configuration. ...
    Downloads: 0 This Week
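
    The replicate-and-place idea can be sketched with a generic greedy heuristic; the code below is written for illustration only and is not DeepSeek's actual EPLB policy or API:

        import heapq

        def place_experts(expert_load, num_gpus, num_slots):
            # 1. Give every expert one replica, then hand extra slots to the currently hottest expert.
            replicas = [1] * len(expert_load)
            for _ in range(num_slots - len(expert_load)):
                hottest = max(range(len(expert_load)), key=lambda e: expert_load[e] / replicas[e])
                replicas[hottest] += 1

            # 2. Greedy placement: assign each replica (largest per-replica load first)
            #    to whichever GPU currently has the least total load.
            shares = [(expert_load[e] / replicas[e], e)
                      for e in range(len(expert_load)) for _ in range(replicas[e])]
            heap = [(0.0, g) for g in range(num_gpus)]
            heapq.heapify(heap)
            placement = {g: [] for g in range(num_gpus)}
            for share, expert in sorted(shares, reverse=True):
                load, g = heapq.heappop(heap)
                placement[g].append(expert)
                heapq.heappush(heap, (load + share, g))
            return placement

        # Four experts, one of them hot; five slots on two GPUs -> the hot expert gets duplicated.
        print(place_experts([90, 10, 10, 10], num_gpus=2, num_slots=5))
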
  • 11
    Core ML Tools

    Core ML tools contain supporting tools for Core ML model conversion

    ...Core ML provides a unified representation for all models. Your app uses Core ML APIs and user data to make predictions, and to fine-tune models, all on the user’s device. Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption. Running a model strictly on the user’s device removes any need for a network connection, which helps keep the user’s data private and your app responsive.
    Downloads: 0 This Week
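
    A short sketch of the conversion flow described above, using coremltools' documented convert API; the tiny PyTorch model, input shape, and file name are placeholders:

        import torch
        import coremltools as ct

        # Trace any torch module, then hand it to ct.convert.
        model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU()).eval()
        example = torch.rand(1, 4)
        traced = torch.jit.trace(model, example)

        mlmodel = ct.convert(
            traced,
            inputs=[ct.TensorType(name="x", shape=example.shape)],
            convert_to="mlprogram",
        )
        mlmodel.save("small_model.mlpackage")  # runs on-device via CPU, GPU, or Neural Engine
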
  • 12
    Covalent workflow

    Pythonic tool for running machine-learning/high-performance workflows

    Covalent is a Pythonic workflow tool for computational scientists, AI/ML software engineers, and anyone who needs to run experiments on limited or expensive computing resources including quantum computers, HPC clusters, GPU arrays, and cloud services. Covalent enables a researcher to run computation tasks on an advanced hardware platform – such as a quantum computer or serverless HPC cluster – using a single line of code. Covalent overcomes computational and operational challenges inherent...
    Downloads: 0 This Week
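
    The dispatch pattern referenced above ("a single line of code") looks roughly like the following with Covalent's electron/lattice decorators; the tasks are toy placeholders, and running it assumes a local Covalent server has been started with "covalent start":

        import covalent as ct

        @ct.electron
        def preprocess(xs):
            return [x * 2 for x in xs]

        @ct.electron
        def train(data):
            return sum(data) / len(data)

        @ct.lattice
        def workflow(xs):
            return train(preprocess(xs))

        # Dispatch the whole workflow; executors (local, HPC, cloud, ...) are configured per electron.
        dispatch_id = ct.dispatch(workflow)([1, 2, 3])
        result = ct.get_result(dispatch_id, wait=True)
        print(result.result)
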
  • 13
    BentoML

    Unified Model Serving Framework

    ...Parallelize compute-intensive model inference workloads to scale separately from the serving logic. Adaptive batching dynamically groups inference requests for optimal performance. Orchestrate a distributed inference graph with multiple models via Yatai on Kubernetes. Easily configure CUDA dependencies for running inference on GPUs. Automatically generate Docker images for production deployment.
    Downloads: 0 This Week
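
    A hedged sketch of a minimal service definition, assuming BentoML's 1.x-style Service/IO-descriptor API (the service name and endpoint are illustrative, and no model runner is attached for brevity):

        import bentoml
        from bentoml.io import JSON

        svc = bentoml.Service("echo_service")

        @svc.api(input=JSON(), output=JSON())
        def classify(payload: dict) -> dict:
            # a real service would call a model runner here instead of echoing the input
            return {"received": payload}

    Served through the bentoml CLI, a definition like this is where the adaptive batching, GPU configuration, and Docker image generation described above come into play.
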
  • 14
    Numba

    NumPy aware dynamic Python compiler using LLVM

    Numba is an open source JIT compiler that translates a subset of Python and NumPy code into fast machine code. Numba translates Python functions to optimized machine code at runtime using the industry-standard LLVM compiler library. Numba-compiled numerical algorithms in Python can approach the speeds of C or FORTRAN. You don't need to replace the Python interpreter, run a separate compilation step, or even have a C/C++ compiler installed. Just apply one of the Numba decorators to your...
    Downloads: 0 This Week
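
    The decorator workflow described above, in a self-contained sketch (the numerical function is illustrative); Numba also ships a numba.cuda module for writing GPU kernels in the same decorator style:

        import numpy as np
        from numba import njit

        @njit  # compiled to optimized machine code via LLVM on first call
        def pairwise_sum(a):
            total = 0.0
            for i in range(a.shape[0]):
                for j in range(a.shape[0]):
                    total += a[i] * a[j]
            return total

        x = np.random.rand(1000)
        print(pairwise_sum(x))  # later calls reuse the cached compiled version
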
  • 15
    Qwen-VL

    Chat & pretrained large vision language model

    Qwen-VL is Alibaba Cloud’s vision-language large model family, designed to integrate visual and linguistic modalities. It accepts image inputs (with optional bounding boxes) and text, and produces text (and sometimes bounding boxes) as output. The model variants (VL-Plus, VL-Max, etc.) have been upgraded for better visual reasoning, text recognition from images, fine-grained understanding, and support for high image resolutions / extreme aspect ratios. Qwen-VL supports multilingual inputs...
    Downloads: 0 This Week
  • 16
    Diplomacy Cicero

    Code for Cicero, an AI agent that plays the game of Diplomacy

    ...It supports two variants: Cicero (which handles full “press” negotiation) and Diplodocus (a variant focused on no-press diplomacy) as described in the README. The codebase is implemented primarily in Python with performance-critical components in C++ (via pybind11 bindings) and is configured to run in a high‐GPU cluster environment. Configuration is managed via protobuf files to define tasks such as self-play, benchmark agent comparisons, and RL training. The project is now archived and read-only, reflecting that it is no longer actively developed but remains publicly available for research use.
    Downloads: 3 This Week
  • 17
    Knema - Frame Continuity Engine

    Knema is a lightweight real-time performance & frame continuity engine

    Knema is an intelligent real-time frame continuity and performance control engine designed to improve game smoothness, not just raw FPS.
    🔹 Adaptive Frametime Control: continuously analyzes frametime distribution (mean, p95, jitter), prioritizes stable frame pacing over artificial FPS boosting, and reduces micro-stutter and sudden frame spikes.
    🔹 GPU-Aware Decision Engine: accurately detects GPU-bound, CPU-bound, and engine-wait scenarios, differentiates real GPU bottlenecks from telemetry glitches, and prevents false performance corrections.
    🔹 Intelligent FPS & Power Management: dynamically adjusts FPS caps based on real hardware limits, reduces unnecessary GPU power consumption in stable scenes, and avoids aggressive throttling that causes oscillation or jitter.
    🔹 Real-Time Probing System: actively tests GPU headroom instead of relying on assumptions, safely probes performance limits without destabilizing gameplay, and automatically backs off when physical limits...
    Downloads: 0 This Week
  • 18
    TorchRec

    Pytorch domain library for recommendation systems

    ...The TorchRec sharder can shard embedding tables with different sharding strategies including data-parallel, table-wise, row-wise, table-wise-row-wise, and column-wise sharding. The TorchRec planner can automatically generate optimized sharding plans for models. Pipelined training overlaps dataloading, device transfer (copy to GPU), inter-device communication (input_dist), and computation (forward, backward) for increased performance. Optimized kernels for RecSys powered by FBGEMM. Quantization support for reduced-precision training and inference. Common modules for RecSys.
    Downloads: 1 This Week
  • 19
    The SpeechBrain Toolkit

    A PyTorch-based Speech Toolkit

    ...Spectral masking, spectral mapping, and time-domain enhancement are different methods already available within SpeechBrain. Separation methods such as Conv-TasNet, Dual-Path RNN, and SepFormer are implemented as well. SpeechBrain provides efficient and GPU-friendly speech augmentation pipelines and acoustic feature extraction.
    Downloads: 1 This Week
  • 20
    Colab-MCP

    An MCP server for interacting with Google Colab

    ...Instead of relying on manual notebook usage, the system allows MCP-compatible agents to execute code, manage files, install dependencies, and orchestrate entire development workflows within Colab’s cloud infrastructure. This approach bridges the gap between local AI agents and remote high-performance compute environments, allowing users to offload heavy workloads such as machine learning training, data analysis, and dependency-heavy tasks to Colab’s GPU and TPU resources. By exposing Colab as an MCP server, the tool enables seamless integration with a wide range of AI assistants and agent frameworks, creating a standardized interface for tool use and execution.
    Downloads: 0 This Week
  • 21
    TAME LLM

    Traditional Mandarin LLMs for Taiwan

    TAME LLM is an open-source initiative focused on building and releasing large language models optimized for Traditional Mandarin and the linguistic context of Taiwan. The project includes models such as Llama-3-Taiwan-70B, which are fine-tuned versions of large transformer architectures trained on extensive corpora containing both Traditional Mandarin and English text. These models are designed to support applications such as conversational AI, knowledge retrieval, and domain-specific...
    Downloads: 0 This Week
  • 22
    Profile Data

    Analyze computation-communication overlap in V3/R1

    profile-data is a repository that publishes profiling traces and metrics from DeepSeek’s training and inference infrastructure (especially during DeepSeek-V3 / R1 experiments). The profiling data targets insights into computation-communication overlap, pipeline scheduling (e.g. DualPipe), and how MoE / EP / parallelism strategies interact in real systems. The repository contains JSON trace files like train.json, prefill.json, decode.json, and associated assets. Users can load them into tools...
    Downloads: 0 This Week
  • 23
    Transformer Engine

    A library for accelerating Transformer models on NVIDIA GPUs

    Transformer Engine (TE) is a library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper GPUs, to provide better performance with lower memory utilization in both training and inference. TE provides a collection of highly optimized building blocks for popular Transformer architectures and an automatic mixed precision-like API that can be used seamlessly with your framework-specific code. TE also includes a framework-agnostic C++...
    Downloads: 0 This Week
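
    A hedged sketch of the autocast-style API described above, using Transformer Engine's PyTorch bindings; the layer sizes are illustrative, and FP8 execution requires a supported (Hopper-class or newer) GPU:

        import torch
        import transformer_engine.pytorch as te
        from transformer_engine.common.recipe import DelayedScaling

        layer = te.Linear(1024, 1024, bias=True).cuda()
        x = torch.randn(16, 1024, device="cuda")

        # Inside the context, supported ops run in FP8; outside it the layer behaves normally.
        with te.fp8_autocast(enabled=True, fp8_recipe=DelayedScaling()):
            y = layer(x)
        y.sum().backward()
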
  • 24
    TensorFlow Model Garden

    Models and examples built with TensorFlow

    The TensorFlow Model Garden is a repository with a number of different implementations of state-of-the-art (SOTA) models and modeling solutions for TensorFlow users. We aim to demonstrate the best practices for modeling so that TensorFlow users can take full advantage of TensorFlow for their research and product development. To improve the transparency and reproducibility of our models, training logs on TensorBoard.dev are also provided for models to the extent possible, though not all models...
    Downloads: 0 This Week
  • 25
    tvm

    Open deep learning compiler stack for CPU, GPU, etc.

    Apache TVM is an open source machine learning compiler framework for CPUs, GPUs, and machine learning accelerators. It aims to enable machine learning engineers to optimize and run computations efficiently on any hardware backend. The vision of the Apache TVM Project is to host a diverse community of experts and practitioners in machine learning, compilers, and systems architecture to build an accessible, extensible, and automated open-source framework that optimizes current and emerging...
    Downloads: 0 This Week
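
    A small vector-add sketch using the classic tensor-expression (te) API from older TVM releases; newer versions organize compilation around TensorIR/Relax instead, so treat this only as an illustration of the compile-then-run flow:

        import numpy as np
        import tvm
        from tvm import te

        n = 1024
        A = te.placeholder((n,), name="A")
        B = te.placeholder((n,), name="B")
        C = te.compute((n,), lambda i: A[i] + B[i], name="C")

        s = te.create_schedule(C.op)
        fadd = tvm.build(s, [A, B, C], target="llvm")  # e.g. "cuda" to target a GPU

        dev = tvm.cpu()
        a = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
        b = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
        c = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
        fadd(a, b, c)
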