Showing 163 open source projects for "gpu max performance"

  • 1
    GPU Hot

    Real-time NVIDIA GPU dashboard

    GPU Hot is an open-source, lightweight monitoring dashboard designed to provide real-time visibility into NVIDIA GPU performance across single machines or entire clusters. The project offers a self-hosted web interface that streams hardware metrics directly from GPU servers, enabling developers, ML engineers, and system administrators to observe GPU utilization and system behavior in real time through a browser (the sketch after this entry shows the kind of NVML polling such a dashboard performs).
    Downloads: 3 This Week
    See Project
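
As an illustration of the metrics such a dashboard streams, here is a minimal polling loop using NVIDIA's NVML bindings (the pynvml module, installable as nvidia-ml-py). This is not GPU Hot's code, just the kind of query it performs under the hood:

```python
# Minimal NVML polling sketch (illustrative; not GPU Hot's actual code).
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

for _ in range(5):
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # % busy (compute, memory)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)         # bytes used / total
    temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
    print(f"gpu={util.gpu}% vram={mem.used / mem.total:.0%} temp={temp}C")
    time.sleep(1)

pynvml.nvmlShutdown()
```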
  • 2
    how-to-optim-algorithm-in-cuda

    How to optimize algorithms in CUDA

    ...These examples show how different optimization techniques influence performance on modern GPU hardware and allow readers to experiment with real implementations. The repository also contains extensive learning notes that summarize CUDA programming concepts, GPU architecture details, and performance engineering strategies (a kernel-timing sketch follows this entry).
    Downloads: 0 This Week
    See Project
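
A recurring theme in this kind of performance work is measuring kernels correctly: CUDA launches are asynchronous, so naive wall-clock timing measures launch overhead rather than execution time. A minimal sketch using PyTorch's CUDA events (my illustration, not code from the repository):

```python
# Correctly timing a GPU operation with CUDA events (illustrative sketch).
import torch

assert torch.cuda.is_available()
a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")

torch.matmul(a, b)        # warm-up: the first call pays one-time setup costs
torch.cuda.synchronize()

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
c = torch.matmul(a, b)
end.record()
torch.cuda.synchronize()  # wait for the kernel before reading the timer

print(f"matmul: {start.elapsed_time(end):.2f} ms")  # elapsed_time returns ms
```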
  • 3
    CatBoost

    High-performance library for gradient boosting on decision trees

    CatBoost is a fast, high-performance open source library for gradient boosting on decision trees. Gradient boosting is a machine learning method with many applications, including ranking, classification, and regression, and CatBoost provides bindings for Python, R, Java, and C++. It offers superior performance to other GBDT libraries on many datasets and supports GPU training out of the box (see the sketch after this entry).
    Downloads: 8 This Week
    See Project
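
A minimal training sketch showing the GPU switch; task_type="GPU" and devices are documented CatBoost parameters, while the toy data is mine:

```python
# Train a CatBoost classifier on the GPU (toy data; illustrative sketch).
import numpy as np
from catboost import CatBoostClassifier

X = np.random.rand(1000, 10)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # synthetic binary target

model = CatBoostClassifier(
    iterations=200,
    learning_rate=0.1,
    task_type="GPU",  # requires a CUDA device; use "CPU" otherwise
    devices="0",      # which GPU(s) to train on
    verbose=False,
)
model.fit(X, y)
print(model.predict(X[:5]))
```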
  • 4
    llmfit

    157 models, 30 providers, one command to find what runs on your hardware

    llmfit is a terminal-based utility that helps developers determine which large language models can realistically run on their local hardware by analyzing system resources and model requirements. The tool automatically detects CPU, RAM, GPU, and VRAM specifications, then ranks available models based on performance factors such as speed, quality, and memory fit. It provides both an interactive terminal user interface and a traditional CLI mode, enabling flexible workflows for different user preferences. llmfit also supports advanced configurations including multi-GPU setups, mixture-of-experts architectures, and dynamic quantization recommendations (the sketch after this entry shows the kind of memory-fit arithmetic this automates). ...
    Downloads: 20 This Week
    See Project
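
The core question llmfit answers (will this model fit?) reduces to arithmetic over parameter count, quantization width, and memory overhead. A back-of-the-envelope sketch of that check; the flat overhead allowance and the 4.5 bits-per-weight figure for 4-bit quantization are my assumptions, not llmfit's actual heuristics:

```python
# Rough VRAM-fit estimate for a quantized LLM (illustrative; not llmfit's code).
def fits_in_vram(params_billions: float, bits_per_weight: float,
                 vram_gib: float, overhead_gib: float = 1.5) -> bool:
    """Weights plus a flat allowance for KV cache/activations vs. VRAM."""
    weight_gib = params_billions * 1e9 * (bits_per_weight / 8) / 2**30
    return weight_gib + overhead_gib <= vram_gib

# A 7B model at ~4-bit quantization on an 8 GiB GPU: ~3.7 GiB of weights.
print(fits_in_vram(7, 4.5, 8))   # True
# The same model at fp16 needs ~13 GiB for the weights alone.
print(fits_in_vram(7, 16, 8))    # False
```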
  • 5
    OpenLIT

    OpenLIT is an open-source LLM Observability tool

    OpenLIT is an OpenTelemetry-native tool designed to help developers gain insight into the performance of their LLM applications in production. It automatically collects LLM input and output metadata and monitors GPU performance for self-hosted LLMs. Integrating observability into a GenAI project takes a single line of code (shown after this entry), and OpenLIT works seamlessly with popular LLM providers such as OpenAI and Hugging Face as well as vector databases like ChromaDB. ...
    Downloads: 4 This Week
    See Project
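
A minimal integration sketch, assuming the package's documented openlit.init(otlp_endpoint=...) entry point; the collector endpoint is a placeholder:

```python
# One-line OpenLIT instrumentation (sketch; the endpoint is a placeholder).
import openlit

# Auto-instruments supported LLM and vector-DB clients and exports
# OpenTelemetry traces/metrics to the given collector endpoint.
openlit.init(otlp_endpoint="http://localhost:4318")

# From here on, calls made through supported clients (e.g. openai) are traced.
```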
  • 6
    Beta9

    Run serverless GPU workloads with fast cold starts on bare-metal

    beta9 is a platform that enables running serverless GPU workloads with fast cold starts on bare-metal servers globally. It allows developers to deploy and scale GPU-accelerated applications without managing underlying infrastructure, offering flexibility and efficiency for AI and high-performance computing tasks. beta9 supports various frameworks and provides tools for monitoring and managing deployments effectively.
    Downloads: 1 This Week
    See Project
  • 7
    FlashAttention

    Fast and memory-efficient exact attention

    FlashAttention is a high-performance deep learning optimization library that reimplements the attention mechanism used in transformer models to be significantly faster and more memory-efficient than standard implementations. It achieves this with IO-aware algorithms that minimize memory reads and writes, avoiding the quadratic memory overhead typically associated with attention operations (a usage sketch follows this entry).
    Downloads: 70 This Week
    See Project
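
A drop-in usage sketch, assuming the flash-attn package's flash_attn_func interface; it expects half-precision (batch, seqlen, nheads, headdim) tensors on a CUDA device:

```python
# Calling FlashAttention directly (requires the flash-attn package and a GPU).
import torch
from flash_attn import flash_attn_func

q = torch.randn(2, 1024, 16, 64, dtype=torch.float16, device="cuda")
k = torch.randn(2, 1024, 16, 64, dtype=torch.float16, device="cuda")
v = torch.randn(2, 1024, 16, 64, dtype=torch.float16, device="cuda")

# Exact attention computed tile-by-tile in on-chip SRAM, so the full
# (seqlen x seqlen) score matrix is never materialized in GPU memory.
out = flash_attn_func(q, k, v, causal=True)
print(out.shape)  # torch.Size([2, 1024, 16, 64])
```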
  • 8
    PowerInfer

    High-speed Large Language Model Serving for Local Deployment

    PowerInfer is a high-performance inference engine designed to run large language models efficiently on personal computers equipped with consumer-grade GPUs. The project focuses on improving the performance of local AI inference by optimizing how neural network computations are distributed between CPU and GPU resources. Its architecture exploits the observation that only a subset of neurons in large models are frequently activated, allowing the system to preload frequently used neurons into GPU memory while processing less common activations on the CPU (a toy sketch of this placement idea follows this entry). ...
    Downloads: 0 This Week
    See Project
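
The hot/cold split can be illustrated with a toy placement policy: given per-neuron activation frequencies, keep the hottest rows of a weight matrix on the GPU and leave the rest in CPU memory. A conceptual sketch, not PowerInfer's implementation:

```python
# Toy hot/cold neuron placement (conceptual sketch; not PowerInfer's code).
import torch

n_neurons, gpu_budget = 1024, 256            # VRAM holds 256 of 1024 rows
freq = torch.rand(n_neurons)                 # measured activation frequencies
hot = torch.topk(freq, gpu_budget).indices   # hottest neurons go to the GPU
cold = torch.ones(n_neurons, dtype=torch.bool)
cold[hot] = False

W = torch.randn(n_neurons, 512)              # one layer's weight matrix
W_hot = W[hot].to("cuda") if torch.cuda.is_available() else W[hot]
W_cold = W[cold]                             # rarely used rows stay in RAM

coverage = (freq[hot].sum() / freq.sum()).item()
print(f"{gpu_budget}/{n_neurons} rows on GPU cover {coverage:.0%} of activations")
```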
  • 9
    Flash-MoE

    Running a big model on a small laptop

    ...It likely includes support for GPU acceleration and parallel processing, enabling it to handle large-scale workloads effectively. The architecture emphasizes speed and efficiency, making it suitable for both research and production environments where performance is critical. It may also provide tools for benchmarking and tuning model behavior. Overall, flash-moe represents a technical advancement in making MoE models more practical and deployable.
    Downloads: 0 This Week
    See Project
  • 10
    uzu

    A high-performance inference engine for AI models

    ...The engine implements a hybrid architecture in which model layers can be executed either as custom GPU kernels or through Apple’s MPSGraph API, allowing it to balance performance and compatibility depending on the workload. By utilizing Apple’s unified memory architecture, uzu reduces memory copying overhead and improves inference throughput for local AI workloads. The system includes a simple high-level API that enables developers to run models, create inference sessions, and generate outputs with minimal configuration.
    Downloads: 1 This Week
    See Project
  • 11
    autoresearch-macos

    AI agents running research on single-GPU nanochat training

    autoresearch-macos is a macOS-focused adaptation of autonomous research loop systems inspired by the autoresearch paradigm, enabling AI agents to iteratively improve machine learning models through self-directed experimentation. The system follows a structured loop in which an agent modifies a training script, executes a fixed-duration experiment, evaluates performance metrics, and decides whether to keep or revert changes (sketched after this entry). It is designed to operate efficiently within macOS environments, making it accessible for developers working outside traditional high-performance GPU clusters. The project typically includes components such as data preparation scripts, a training loop, and an instruction file that guides the agent’s behavior. ...
    Downloads: 0 This Week
    See Project
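
The loop described above fits in a few lines; propose_edit and run_experiment below are hypothetical placeholders for the agent's edit step and the fixed-duration training run, not functions from the project:

```python
# Keep-or-revert research loop (sketch; propose_edit / run_experiment are
# hypothetical placeholders, not autoresearch-macos APIs).
import shutil

SCRIPT, BACKUP = "train.py", "train.py.bak"

def research_loop(propose_edit, run_experiment, iterations=10):
    best = run_experiment(SCRIPT)        # baseline metric (higher is better)
    for _ in range(iterations):
        shutil.copy(SCRIPT, BACKUP)      # snapshot before the agent edits
        propose_edit(SCRIPT)             # agent modifies the training script
        score = run_experiment(SCRIPT)   # fixed-duration experiment
        if score > best:
            best = score                 # keep the change
        else:
            shutil.copy(BACKUP, SCRIPT)  # revert to the snapshot
    return best
```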
  • 12
    NVIDIA cuOpt

    GPU accelerated decision optimization

    ...The platform provides multiple interfaces, including C, Python, and server APIs, allowing developers to integrate optimization capabilities into applications and services. cuOpt is designed for high-performance environments and can be deployed across cloud, hybrid, or on-premise infrastructures. By combining GPU acceleration with scalable APIs, cuOpt enables organizations to solve large optimization challenges in logistics, operations research, and decision-making systems.
    Downloads: 0 This Week
    See Project
  • 13
    Flux.jl

    Relax! Flux is the ML library that doesn't make you tensor

    Flux is an elegant approach to machine learning. It's a 100% pure Julia stack and provides lightweight abstractions on top of Julia's native GPU and AD support. Flux makes the easy things easy while remaining fully hackable. Flux provides a single, intuitive way to define models, just like mathematical notation. Julia transparently compiles your code, optimizing and fusing kernels for the GPU, for the best performance. Existing Julia libraries are differentiable and can be incorporated directly into Flux models. ...
    Downloads: 0 This Week
    See Project
  • 14
    LTX-2

    Python inference and LoRA trainer package for the LTX-2 audio–video model

    LTX-2 is an open-source package from Lightricks for the LTX-2 audio–video generation model. It provides Python inference pipelines for running the model and a LoRA trainer for fine-tuning it on custom data, giving developers a single toolkit for both generating content and adapting the model to their own domains. ...
    Downloads: 41 This Week
    See Project
  • 15
    GPUStack

    Performance-optimized AI inference on your GPUs

    GPUStack is an open-source GPU cluster management platform designed to simplify the deployment and operation of artificial intelligence models across heterogeneous hardware environments. The system aggregates GPU resources from multiple machines into a unified cluster so developers and administrators can run large language models and other AI workloads efficiently across distributed infrastructure. Instead of requiring complex orchestration systems such as Kubernetes, GPUStack provides a...
    Downloads: 1 This Week
    See Project
  • 16
    OpenFang

    Open-source Agent Operating System

    OpenFang is an open-source agent operating system designed to orchestrate autonomous AI agents and workflows in a structured, production-oriented environment. Written primarily in Rust, the project focuses on building a high-performance runtime where multiple specialized agents can collaborate to complete complex computational or development tasks. It aims to move beyond simple chat-based agents by providing infrastructure for persistent agent memory, task coordination, and scalable execution. The system is positioned as a foundation for building advanced AI tooling, particularly in environments that require tight integration with GPU workflows and modern AI pipelines. ...
    Downloads: 8 This Week
    See Project
  • 17
    Modular Platform

    The Modular Platform (includes MAX & Mojo)

    Modular is a high-performance AI infrastructure company repository focused on building next-generation compute and software tools for machine learning workloads. The project centers on enabling developers to run AI models faster and more efficiently by rethinking the traditional ML software stack. It is closely associated with the Mojo programming language and related tooling that aims to combine Python usability with systems-level performance. Modular’s ecosystem is designed to simplify...
    Downloads: 0 This Week
    See Project
  • 18
    Zed

    High-performance, multiplayer code editor from the creators of Atom

    Zed is a next-generation code editor designed for high-performance collaboration with humans and AI. Written from scratch in Rust to efficiently leverage multiple CPU cores and your GPU. Integrate upcoming LLMs into your workflow to generate, transform, and analyze code. Chat with teammates, write notes together, and share your screen and project. Multibuffers compose excerpts from across the codebase in one editable surface.
    Downloads: 35 This Week
    See Project
  • 19
    HeavyDB

    HeavyDB (formerly MapD/OmniSciDB)

    HeavyDB is an open-source GPU-accelerated analytical database designed to perform extremely fast queries on large datasets. The system is built as a SQL-based relational columnar database engine that leverages modern hardware parallelism, including GPUs and multicore CPUs. Its architecture allows users to query datasets containing billions of rows in milliseconds without requiring traditional indexing, pre-aggregation, or sampling techniques. HeavyDB was originally developed as part of the...
    Downloads: 0 This Week
    See Project
  • 20
    LMCache

    Supercharge Your LLM with the Fastest KV Cache Layer

    ...These capabilities aim to lower latency, cut GPU cycles, and stabilize performance for production workloads with overlapping prompts or retrieval-augmented contexts. The end result is a cache fabric for LLMs that complements inference engines rather than replacing them (the prefix-reuse idea is sketched after this entry).
    Downloads: 0 This Week
    See Project
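
The prefix-reuse idea can be shown with a toy lookup keyed by token prefixes; a conceptual sketch of the caching logic, not LMCache's API:

```python
# Toy prefix-keyed KV reuse (conceptual sketch; not LMCache's API).
store: dict[tuple, str] = {}

def put(tokens: list[int], kv: str) -> None:
    store[tuple(tokens)] = kv

def longest_cached_prefix(tokens: list[int]) -> tuple[str | None, int]:
    """Return (kv, hit_length) for the longest stored prefix of tokens."""
    for n in range(len(tokens), 0, -1):
        kv = store.get(tuple(tokens[:n]))
        if kv is not None:
            return kv, n  # the engine only recomputes tokens[n:]
    return None, 0

put([1, 2, 3, 4], "kv-for-[1,2,3,4]")
print(longest_cached_prefix([1, 2, 3, 4, 5, 6]))  # ('kv-for-[1,2,3,4]', 4)
```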
  • 21
    FLUX.2-klein-4B

    Pure C inference for the FLUX.2 image generation model

    ...Because the implementation is in plain C and focuses on data locality and vectorized operations, flux2.c can be integrated into performance-critical code paths where control over memory layout and execution behavior matters, such as GPU kernels, embedded systems, or custom ML runtime engines.
    Downloads: 11 This Week
    See Project
  • 22
    UCCL

    UCCL is an efficient communication library for GPUs

    UCCL is a high-performance GPU communication library designed to support distributed machine learning workloads and large-scale AI systems. The library focuses on enabling efficient data transfer and collective communication between GPUs during training and inference. It supports a variety of communication patterns, including collective operations such as all-reduce (illustrated after this entry) as well as peer-to-peer transfers commonly used in modern machine learning architectures. ...
    Downloads: 0 This Week
    See Project
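
For readers unfamiliar with the collective patterns involved, this is what an all-reduce looks like at the framework level, using PyTorch's torch.distributed with the NCCL backend; it illustrates the operation such libraries accelerate, not UCCL's own API:

```python
# All-reduce across GPU ranks (uses torch.distributed/NCCL, not UCCL's API).
# Launch with: torchrun --nproc_per_node=2 allreduce_demo.py
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")  # one process per GPU
rank = dist.get_rank()
torch.cuda.set_device(rank)

t = torch.full((4,), float(rank), device="cuda")
dist.all_reduce(t, op=dist.ReduceOp.SUM)  # every rank ends up with the sum
print(f"rank {rank}: {t.tolist()}")       # with 2 ranks: [1.0, 1.0, 1.0, 1.0]

dist.destroy_process_group()
```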
  • 23
    KVCache-Factory

    Unified KV Cache Compression Methods for Auto-Regressive Models

    ...In large language models, the key-value cache stores intermediate attention states that enable efficient token generation during inference, but these caches can consume large amounts of GPU memory when handling long contexts. KVCache-Factory provides a platform for implementing and evaluating multiple compression strategies that reduce memory usage while preserving model performance. The framework integrates several state-of-the-art methods such as PyramidKV, SnapKV, H2O, and StreamingLLM, allowing researchers to compare different approaches within the same environment (a simplified eviction sketch follows this entry). ...
    Downloads: 0 This Week
    See Project
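
As a flavor of what these methods do, here is a simplified H2O-style eviction step: keep a recent window plus the "heavy hitter" tokens that have received the most attention mass so far. A conceptual sketch, not KVCache-Factory code:

```python
# Simplified H2O-style KV cache eviction (conceptual; not KVCache-Factory code).
import torch

def keep_indices(attn_mass: torch.Tensor, recent: int, heavy: int) -> torch.Tensor:
    """attn_mass[i] = total attention token i has received so far."""
    seq_len = attn_mass.shape[0]
    recent_idx = torch.arange(seq_len - recent, seq_len)  # sliding window
    older = attn_mass[: seq_len - recent]
    heavy_idx = torch.topk(older, heavy).indices          # heavy hitters
    return torch.cat([heavy_idx, recent_idx]).sort().values

mass = torch.rand(1000)                  # one entry per cached token
kept = keep_indices(mass, recent=128, heavy=64)
print(f"cache shrunk from 1000 to {kept.numel()} entries")
```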
  • 24
    Text Generation Inference

    Large Language Model Text Generation Inference

    Text Generation Inference is a high-performance inference server for text generation models built around Hugging Face Transformers. It is designed to serve large language models efficiently and at scale, with optimizations such as continuous batching and tensor parallelism (a client sketch follows this entry).
    Downloads: 1 This Week
    See Project
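
Once a server is running (typically via the project's Docker image), it exposes an HTTP API that the huggingface_hub client can call. A minimal sketch, assuming a server is already listening on localhost:8080:

```python
# Query a running TGI server (sketch; assumes a server on localhost:8080).
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")
reply = client.text_generation(
    "Explain KV caching in one sentence:",
    max_new_tokens=64,
)
print(reply)
```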
  • 25
    Insanely Fast Whisper

    An opinionated CLI to transcribe Audio files w/ Whisper on-device

    Insanely Fast Whisper is a high-performance command-line tool designed to dramatically accelerate speech-to-text transcription using OpenAI’s Whisper models on local hardware. It leverages modern optimizations such as batch processing, mixed precision, and advanced attention mechanisms like Flash Attention to significantly reduce inference time while maintaining high transcription accuracy (the underlying pipeline pattern is sketched after this entry).
    Downloads: 6 This Week
    See Project
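
The CLI builds on the Transformers ASR pipeline, and the same speed levers (half precision, chunked long-form decoding, large batches) are visible in a direct pipeline call. A sketch of that underlying pattern, not the tool's own source; the audio path is a placeholder:

```python
# Batched, half-precision Whisper transcription with transformers
# (illustrates the pattern the CLI wraps; "meeting.mp3" is a placeholder).
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    torch_dtype=torch.float16,
    device="cuda:0",
)
result = asr(
    "meeting.mp3",
    chunk_length_s=30,       # split long audio into 30-second chunks
    batch_size=24,           # decode many chunks per forward pass
    return_timestamps=True,
)
print(result["text"])
```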