Search Results for "gpu max performance" - Page 8

Showing 457 open source projects for "gpu max performance"

  • 1
    Mooncake

    Mooncake is the serving platform for Kimi

    Mooncake is an open-source infrastructure platform designed to optimize large language model serving by focusing on efficient management and transfer of model data and KV cache. The platform was originally developed as part of the serving infrastructure for the Kimi large language model system. Its architecture centers on a high-performance transfer engine that provides unified data transfer across different storage and networking technologies. This engine enables efficient movement of tensors and model data across heterogeneous environments such as GPU memory, system memory, and distributed storage systems. Mooncake also introduces distributed key-value cache storage that allows inference systems to reuse previously computed attention states, significantly improving throughput in large-scale deployments. ...
    Downloads: 1 This Week
    See Project
  • 2
    oneDNN

    oneAPI Deep Neural Network Library (oneDNN)

    This software was previously known as Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) and Deep Neural Network Library (DNNL). oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform performance library of basic building blocks for deep learning applications. oneDNN is part of oneAPI. The library is optimized for Intel(R) Architecture Processors, Intel Processor Graphics and Xe Architecture graphics. oneDNN has experimental support for the following architectures: Arm* 64-bit Architecture (AArch64), NVIDIA* GPU, OpenPOWER* Power ISA (PPC64), IBMz* (s390x), and RISC-V. oneDNN is intended for deep learning applications and framework developers interested in improving application performance on Intel CPUs and GPUs. ...
    Downloads: 1 This Week
    See Project
  • 3
    TensorFlow Serving

    Serving system for machine learning models

    TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. It deals with the inference aspect of machine learning, taking models after training and managing their lifetimes, providing clients with versioned access via a high-performance, reference-counted lookup table. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can be easily extended to serve other types of models and data; a minimal client sketch follows this entry. ...
    Downloads: 1 This Week
    See Project
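    As an illustration of the client side of the serving workflow described above, here is a minimal sketch that calls TensorFlow Serving's documented REST predict endpoint. The half_plus_two model name, host, and port describe a hypothetical local deployment, not anything shipped with this entry.

        # Minimal REST client for a model served by TensorFlow Serving.
        # Assumes a hypothetical local server exposing a model named
        # "half_plus_two" on the default REST port 8501.
        import json
        import urllib.request

        url = "http://localhost:8501/v1/models/half_plus_two:predict"
        payload = json.dumps({"instances": [1.0, 2.0, 5.0]}).encode("utf-8")
        req = urllib.request.Request(
            url, data=payload, headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["predictions"])  # e.g. [2.5, 3.0, 4.5]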
  • 4
    Stable Diffusion Version 2

    High-Resolution Image Synthesis with Latent Diffusion Models

    Stable Diffusion (the stablediffusion repo by Stability-AI) is an open-source implementation and reference codebase for high-resolution latent diffusion image models that power many text-to-image systems. The repository provides code for training and running Stable Diffusion-style models, instructions for installing dependencies (with notes about performance libraries like xformers), and guidance on hardware/driver requirements for efficient GPU inference and training. It’s organized as a practical, developer-focused toolkit: model code, scripts for inference, and examples for using memory-efficient attention and related optimizations are included so researchers and engineers can run or adapt the model for their own projects; a short inference sketch using the separate diffusers library follows this entry. ...
    Downloads: 19 This Week
    See Project
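    The repository ships its own inference scripts; purely as a hedged illustration, the sketch below instead runs a Stable Diffusion 2.x checkpoint through Hugging Face's separate diffusers library, with the model id and prompt as assumptions.

        # Illustrative only: text-to-image with a Stable Diffusion 2.x
        # checkpoint via the diffusers library (not this repo's scripts).
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
        pipe = pipe.to("cuda")  # GPU inference, per the hardware notes above

        image = pipe("a watercolor fox in a snowy forest").images[0]
        image.save("fox.png")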
  • 5
    ort

    Fast ML inference & training for ONNX models in Rust

    ort is a high-performance Rust library that provides bindings to ONNX Runtime, enabling developers to run machine learning inference and training workflows directly within Rust applications using the standardized ONNX model format. It is designed to bridge the gap between modern machine learning frameworks and systems programming by offering a safe, ergonomic API for executing models originally built in ecosystems like PyTorch, TensorFlow, or scikit-learn. ...
    Downloads: 0 This Week
    See Project
  • 6
    RLHF-Reward-Modeling

    Recipes to train reward model for RLHF

    RLHF-Reward-Modeling is an open-source research framework focused on training reward models used in reinforcement learning from human feedback for large language models. In RLHF pipelines, reward models are responsible for evaluating generated responses and assigning scores that guide the model toward outputs that better match human preferences. The repository provides training recipes and implementations for building reward and preference models using modern machine learning frameworks; a generic sketch of the pairwise objective such recipes optimize follows this entry. ...
    Downloads: 0 This Week
    See Project
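    To make the scoring idea concrete, here is a generic sketch of the Bradley-Terry-style pairwise loss commonly used to train reward models. The tiny linear scorer and random features are placeholder assumptions, not code from this repository.

        # Generic pairwise reward-modeling loss (illustrative only).
        import torch
        import torch.nn.functional as F

        reward_head = torch.nn.Linear(16, 1)  # stand-in for an LLM + scalar head

        def pairwise_loss(chosen, rejected):
            # Score the human-preferred response above the rejected one:
            # loss = -log(sigmoid(r_chosen - r_rejected)).
            return -F.logsigmoid(reward_head(chosen) - reward_head(rejected)).mean()

        loss = pairwise_loss(torch.randn(4, 16), torch.randn(4, 16))
        loss.backward()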
  • 7
    MaxText

    A simple, performant and scalable Jax LLM

    MaxText is a high-performance, highly scalable open-source framework designed to train and fine-tune large language models using the JAX ecosystem. The project acts as both a reference implementation and a practical training library that demonstrates best practices for building and scaling transformer-based language models on modern accelerator hardware. It is optimized to run efficiently on Google Cloud TPUs and GPUs. ...
    Downloads: 0 This Week
    See Project
  • 8
    Metal.jl

    Metal programming in Julia

    With Metal.jl it's possible to program GPUs on macOS using the Metal programming framework. The package is a work in progress: there are bugs, functionality is missing, and performance hasn't been optimized, so expect to have to make changes to this package if you want to use it. PRs are very welcome. The package's system requirements are fairly strict, owing to the maintainers' limited development resources (manpower and hardware); technically, they could be relaxed. ...
    Downloads: 1 This Week
    See Project
  • 9
    Bend

    A massively parallel, high-level programming language

    Bend is a massively parallel, high-level programming language powered by the HVM2 runtime, which is based on interaction combinators. Unlike low-level alternatives such as CUDA, it has the feel and features of an expressive language like Python or Haskell, with fast object allocation, full closure support, and unrestricted recursion. Yet it runs on massively parallel hardware such as multi-core CPUs and GPUs, with near-linear speedup based on core count and zero explicit parallelism annotations: no thread spawning, no locks, mutexes, or atomics. Anything that can run in parallel will run in parallel. ...
    Downloads: 0 This Week
    See Project
  • 10
    Makie

    Interactive data visualizations and plotting in Julia

    Makie is an interactive data visualization and plotting ecosystem for the Julia programming language, available on Windows, Linux, and Mac. The backend packages GLMakie, WGLMakie, CairoMakie and RPRMakie add different functionalities: you can use Makie to interactively explore your data and create simple GUIs in native windows or web browsers, export high-quality vector graphics, or even raytrace with physically accurate lighting. ...
    Downloads: 4 This Week
    See Project
  • 11
    autoresearch for AMD

    AI agents running research on single-GPU nanochat training

    autoresearch for AMD is a framework for autonomous scientific experimentation in machine learning, enabling AI agents to iteratively improve models through a continuous loop of hypothesis generation, experimentation, and evaluation. The system is built around a minimal structure that includes a data preparation module, a training script that can be modified, and a program specification that guides the agent’s decision-making process. During each iteration, the agent edits the training code, runs the resulting experiment, and evaluates the outcome to inform its next hypothesis. ...
    Downloads: 0 This Week
    See Project
  • 12
    TensorRT LLM

    TensorRT LLM provides users with an easy-to-use Python API

    TensorRT-LLM is an open-source high-performance inference library specifically designed to optimize and accelerate large language model deployment on NVIDIA GPUs. It provides a Python-based API built on top of PyTorch that allows developers to define, customize, and deploy LLMs efficiently across a variety of hardware configurations, from single GPUs to large multi-node clusters. The library focuses on maximizing throughput and minimizing latency through advanced techniques such as in-flight batching, paged KV caching, and quantization; a brief usage sketch follows this entry. ...
    Downloads: 3 This Week
    See Project
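    As a hedged sketch only: recent TensorRT-LLM releases expose a high-level LLM API in Python, and usage resembles the following, where the model id, prompt, and sampling settings are illustrative assumptions rather than anything prescribed by the project.

        # Sketch of TensorRT-LLM's high-level LLM API (recent releases);
        # model, prompt, and parameters are placeholder assumptions.
        from tensorrt_llm import LLM, SamplingParams

        llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
        params = SamplingParams(max_tokens=64, temperature=0.8)

        for output in llm.generate(["Explain KV caching in one line."], params):
            print(output.outputs[0].text)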
  • 13
    Selkies-GStreamer

    Open-Source Low-Latency Accelerated Linux WebRTC HTML5 Remote Desktop

    selkies-gstreamer is a GStreamer-based media streaming component used in the Selkies project, a cloud-native platform designed for interactive desktop and application streaming. This module acts as a high-performance media pipeline that captures video, encodes it with low latency, and streams it via WebRTC to client browsers. It is optimized for GPU-accelerated encoding and integrates with Kubernetes-based deployments to enable scalable, real-time remote desktop sessions. This component plays a critical role in delivering smooth, responsive experiences for cloud-based workstations, gaming, or visualization tools.
    Downloads: 3 This Week
    See Project
  • 14
    TIGRE

    TIGRE: Tomographic Iterative GPU-based Reconstruction Toolbox

    TIGRE is an open-source toolbox for fast and accurate 3D tomographic reconstruction for any geometry. Its focus is on iterative algorithms for improved image quality that have all been optimized to run on GPUs (including multi-GPUs) for improved speed. It combines the higher-level abstraction of MATLAB or Python with the performance of CUDA at a lower level in order to make it both fast and easy to use. TIGRE is free to download and distribute: use it, modify it, add to it, and share it. ...
    Downloads: 0 This Week
    See Project
  • 15
    VK-GL-CTS

    Khronos Vulkan, OpenGL, and OpenGL ES Conformance Tests

    VK-GL-CTS is the Khronos conformance test suite for the Vulkan, OpenGL, and OpenGL ES graphics APIs. These tests are essential for vendors seeking certification, as they rigorously check the correctness and completeness of driver implementations against standardized behavior. The suite contains thousands of automated tests that assess rendering accuracy, API behavior, memory usage, and performance consistency. It is widely used by GPU vendors and developers to ensure compatibility, stability, and reliability across platforms and hardware.
    Downloads: 2 This Week
    See Project
  • 16
    MusicGPT

    Generate music based on natural language prompts using LLMs

    MusicGPT is an open-source application designed to generate music from natural language prompts using locally executed artificial intelligence models. The software allows users to run advanced music generation systems directly on their own devices without requiring heavy dependencies such as Python or full machine learning frameworks. Instead, it provides a lightweight environment capable of executing music generation models locally on CPUs or GPUs while maintaining strong performance. ...
    Downloads: 7 This Week
    See Project
  • 17
    bitnet.cpp

    Official inference framework for 1-bit LLMs

    bitnet.cpp is the official open-source inference framework and ecosystem designed to enable ultra-efficient execution of 1-bit large language models (LLMs), which quantize most model parameters to ternary values (-1, 0, +1) while maintaining competitive performance with full-precision counterparts. At its core is bitnet.cpp, a highly optimized C++ backend that supports fast, low-memory inference on both CPUs and GPUs, enabling models such as BitNet b1.58 to run without requiring enormous compute or memory resources. ...
    Downloads: 3 This Week
    See Project
  • 18
    MiniMax-M2

    MiniMax-M2, a model built for Max coding & agentic workflows

    MiniMax-M2 is an open-weight large language model designed specifically for high-end coding and agentic workflows while staying compact and efficient. It uses a Mixture-of-Experts (MoE) architecture with 230 billion total parameters but only 10 billion activated per token, giving it the behavior of a very large model at a fraction of the runtime cost; a generic sketch of this kind of top-k expert routing follows this entry. The model is tuned for end-to-end developer flows such as multi-file edits, compile–run–fix loops, and test-validated repairs. ...
    Downloads: 0 This Week
    See Project
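    To illustrate the sparse-activation idea (not MiniMax-M2's actual implementation), here is a generic top-k mixture-of-experts routing sketch; the expert count, dimensions, and linear experts are placeholder assumptions.

        # Generic top-k MoE routing (illustrative only): each token is sent
        # to k of E experts, so only a fraction of parameters is active.
        import torch

        E, k, d = 8, 2, 16                     # experts, active experts, dim
        experts = torch.nn.ModuleList(torch.nn.Linear(d, d) for _ in range(E))
        router = torch.nn.Linear(d, E)

        def moe(x):                            # x: (tokens, d)
            weights, idx = router(x).topk(k, dim=-1)   # pick k experts/token
            weights = torch.softmax(weights, dim=-1)
            out = torch.zeros_like(x)
            for j in range(k):                 # weighted sum of expert outputs
                for e in range(E):
                    mask = idx[:, j] == e
                    if mask.any():
                        out[mask] += weights[mask, j:j+1] * experts[e](x[mask])
            return out

        print(moe(torch.randn(4, d)).shape)    # torch.Size([4, 16])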
  • 19
    SageAttention

    [NeurIPS 2025 Spotlight] Quantized Attention

    SageAttention is an open-source optimization library designed to accelerate the attention mechanism used in transformer-based neural networks. Since attention operations are often the most computationally expensive component of modern AI models, SageAttention introduces quantization techniques that significantly reduce computational overhead while preserving model accuracy. The system achieves this by using low-precision numerical formats such as INT4, FP8, or INT8 to represent the matrices involved in attention computation; a drop-in usage sketch follows this entry. ...
    Downloads: 1 This Week
    See Project
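    A hedged sketch of typical usage, on the assumption that the package exposes sageattn as a drop-in replacement for PyTorch's scaled_dot_product_attention; the tensor shapes are illustrative.

        # Assumed drop-in usage of SageAttention's quantized kernel.
        import torch
        from sageattention import sageattn

        # (batch, heads, seq_len, head_dim), half precision on GPU
        q = torch.randn(1, 8, 128, 64, dtype=torch.float16, device="cuda")
        k = torch.randn_like(q)
        v = torch.randn_like(q)

        out = sageattn(q, k, v, is_causal=False)
        print(out.shape)  # torch.Size([1, 8, 128, 64])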
  • 20
    BentoML

    Unified Model Serving Framework

    BentoML is a unified framework for serving machine learning models. Parallelize compute-intensive model inference workloads to scale separately from the serving logic. Adaptive batching dynamically groups inference requests for optimal performance. Orchestrate distributed inference graphs with multiple models via Yatai on Kubernetes. Easily configure CUDA dependencies for running inference with GPU. Automatically generate docker images for production deployment. A minimal service sketch follows this entry.
    Downloads: 1 This Week
    See Project
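    A minimal sketch, assuming the @bentoml.service decorator API of BentoML 1.2+; the Summarizer service and its truncation stub are placeholder assumptions.

        # Minimal BentoML service definition (assumes the 1.2+ API).
        import bentoml

        @bentoml.service
        class Summarizer:
            @bentoml.api
            def summarize(self, text: str) -> str:
                # A real service would invoke a model; this stub truncates.
                return text[:80]

        # Assumed launch command, if this file is service.py:
        #   bentoml serve service:Summarizer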
  • 21
    FFmpegAndroid

    FFmpeg implements video cropping, watermarking, transcoding

    FFmpegAndroid is a comprehensive Android-focused multimedia development project that demonstrates how to integrate and use FFmpeg for advanced audio and video processing tasks. It provides a wide range of implementations including video editing, transcoding, watermarking, and GIF generation, all optimized for mobile environments. The project also covers real-time streaming capabilities such as local and live RTMP pushing using H264 encoding, making it suitable for building live broadcasting applications. ...
    Downloads: 0 This Week
    See Project
  • 22
    JiT

    PyTorch implementation of JiT

    JiT is an open-source PyTorch implementation of a state-of-the-art image diffusion model designed around a minimalist yet powerful architecture for pixel-level generative modeling, based on the paper Back to Basics: Let Denoising Generative Models Denoise. Rather than predicting noise, JiT models directly predict clean image data, which the research suggests aligns better with the manifold structure of natural images and leads to stronger generative performance at high resolution; a generic sketch of this x-prediction objective follows this entry. ...
    Downloads: 0 This Week
    See Project
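    To clarify the predict-the-data idea, here is a generic x-prediction training step (illustrative, not the repository's code); the toy convolutional denoiser and tensor shapes are placeholder assumptions.

        # Generic x-prediction diffusion objective: regress the clean
        # image x0 from a noised input, instead of predicting the noise.
        import torch

        model = torch.nn.Conv2d(3, 3, 3, padding=1)   # toy stand-in denoiser
        x0 = torch.randn(2, 3, 32, 32)                # "clean" images
        eps = torch.randn_like(x0)                    # Gaussian noise
        a_bar = torch.rand(2, 1, 1, 1)                # per-sample noise level

        x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps
        loss = ((model(x_t) - x0) ** 2).mean()        # predict clean data
        loss.backward()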
  • 23
    Ghostfolio

    Open Source Wealth Management Software. Angular + NestJS + Prisma

    Ghostfolio is an open-source wealth management software built with web technology. The application empowers busy people to keep track of stocks, ETFs or cryptocurrencies and make solid, data-driven investment decisions. The software is designed for personal use in continuous operation. It is a privacy-first dashboard for your personal finances: break down your asset allocation and know your net worth. ...
    Downloads: 2 This Week
    See Project
  • 24
    JAX

    Composable transformations of Python+NumPy programs

    With its updated version of Autograd, JAX can automatically differentiate native Python and NumPy functions. It can differentiate through loops, branches, recursion, and closures, and it can take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation) via grad as well as forward-mode differentiation, and the two can be composed arbitrarily to any order. What’s new is that JAX uses XLA to compile and run your NumPy programs on GPUs and TPUs; a tiny sketch of composing grad follows this entry. ...
    Downloads: 0 This Week
    See Project
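    A tiny runnable sketch of the composable differentiation described above; tanh is just a convenient example function.

        # Composable autodiff in JAX: grad of grad, then XLA-compiled.
        import jax
        import jax.numpy as jnp

        df = jax.grad(jnp.tanh)       # first derivative
        d2f = jax.grad(df)            # derivative of the derivative

        print(df(0.5))                # 1 - tanh(0.5)**2 ≈ 0.7864
        print(jax.jit(d2f)(0.5))      # second derivative, compiled via XLA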
  • 25
    tt-metal

    TT-NN operator library, and TT-Metalium low level kernel programming

    tt-metal, also referred to in its documentation as TT-Metalium, is Tenstorrent’s low-level software development kit for programming applications on Tenstorrent AI accelerators. The project is designed for developers who need direct access to the company’s Tensix processor architecture, exposing a programming model that is closer to hardware control than high-level inference frameworks. Instead of following a traditional GPU model centered on massive thread parallelism, the platform is built around low-level kernel programming with explicit control over compute and data movement on the Tensix cores. ...
    Downloads: 2 This Week
    See Project