Showing 240 open source projects for "gpu hardware"

  • 1
    tvm

    Open deep learning compiler stack for CPU, GPU, etc.

    Apache TVM is an open source machine learning compiler framework for CPUs, GPUs, and machine learning accelerators. It aims to enable machine learning engineers to optimize and run computations efficiently on any hardware backend. The vision of the Apache TVM Project is to host a diverse community of experts and practitioners in machine learning, compilers, and systems architecture to build an accessible, extensible, and automated open-source framework that optimizes current and emerging...
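    As a rough illustration of the workflow TVM enables, the sketch below compiles a small tensor-expression kernel for a CPU target; it uses the classic te API, whose exact entry points vary between TVM releases, so treat it as a hedged example rather than canonical usage.

```python
import numpy as np
import tvm
from tvm import te

# Describe the computation symbolically: elementwise vector addition.
n = 1024
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# Lower and compile for a CPU backend; GPU targets such as "cuda" follow the
# same pattern when the corresponding hardware and toolchain are available.
s = te.create_schedule(C.op)
vector_add = tvm.build(s, [A, B, C], target="llvm")

dev = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
b = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
c = tvm.nd.empty((n,), dtype="float32", device=dev)
vector_add(a, b, c)
```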
    Downloads: 0 This Week
  • 2
    Codon

    A high-performance, zero-overhead, extensible Python compiler

    Codon is a high-performance Python compiler that compiles Python code to native machine code without any runtime overhead. Typical speedups over Python are on the order of 100x or more, on a single thread. Codon supports native multithreading which can lead to speedups many times higher still. The Codon framework is fully modular and extensible, allowing for the seamless integration of new modules, compiler optimizations, domain-specific languages and so on. We actively develop Codon...
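    As a hedged sketch of how Codon can be used from an existing Python program, the example below assumes Codon's Python JIT bindings are installed (`import codon` with the `@codon.jit` decorator); the standalone `codon build`/`codon run` CLI is the other common entry point.

```python
import codon  # Codon's Python JIT bindings (assumed installed alongside the compiler)

@codon.jit
def sum_of_squares(n: int) -> int:
    # Compiled to native machine code on first call instead of being interpreted.
    total = 0
    for i in range(n):
        total += i * i
    return total

print(sum_of_squares(10_000_000))
```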
    Downloads: 5 This Week
  • 3
    ArrayFire

    ArrayFire, a general-purpose GPU library

    ArrayFire is a general-purpose tensor library that simplifies the process of software development for the parallel architectures found in CPUs, GPUs, and other hardware acceleration devices. The library serves users in every technical computing market. Data structures in ArrayFire are smartly managed to avoid costly memory transfers and to take advantage of each performance feature provided by the underlying hardware. The community of ArrayFire developers invites you to build with us if...
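    A minimal sketch of the array-programming style ArrayFire exposes, shown here through its Python bindings (assuming the `arrayfire` package and a CUDA, OpenCL, or CPU backend are installed); the C++ API follows the same pattern.

```python
import arrayfire as af  # ArrayFire Python bindings (assumed installed with a working backend)

af.info()                  # report which backend and device were selected
a = af.randu(1024, 1024)   # arrays are allocated on the accelerator
b = af.matmul(a, a)        # operations execute on the device, evaluated lazily
print(af.sum(b))           # reductions return a host-side scalar
```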
    Downloads: 1 This Week
  • 4
    Ollama Telegram Bot

    Ollama Telegram bot, with advanced configuration

    ...It includes access control features such as user whitelists and admin roles, allowing fine-grained control over who can interact with the bot and manage its behavior. The bot connects to a local or remote Ollama server, enabling users to run models on their own hardware while maintaining full privacy. It supports Docker-based deployment, making it easy to set up alongside an Ollama instance with optional GPU acceleration. Configuration is handled through environment variables, allowing customization of models, timeouts, and interaction rules. Overall, ollama-telegram provides a lightweight and extensible solution for deploying personal or team-based AI assistants.
    Downloads: 0 This Week
  • 5
    shimmy

    Python-free Rust inference server

    ...It supports modern model formats such as GGUF and SafeTensors and can automatically discover models stored locally or in common directories used by other AI tools. Advanced capabilities include CPU offloading for Mixture-of-Experts models and GPU acceleration, enabling large models to run on consumer hardware with limited VRAM.
    Downloads: 0 This Week
  • 6
    qvac-fabric-llm.cpp

    QVAC Fabric: cross-platform LLM inference and fine-tuning

    qvac-fabric-llm.cpp is a cross-platform large language model inference and fine-tuning engine built as an advanced fork of llama.cpp, designed to run efficiently across desktops, mobile devices, and heterogeneous GPU environments. The project focuses on removing hardware limitations traditionally associated with LLM deployment by enabling support for a wide range of backends, including Vulkan, Metal, CUDA, and CPU, making it accessible on devices ranging from smartphones to enterprise servers. It introduces native LoRA fine-tuning capabilities that can be executed directly on consumer hardware, allowing developers to train and adapt models locally without relying on cloud infrastructure. ...
    Downloads: 0 This Week
  • 7
    Wan2.1

    Wan2.1: Open and Advanced Large-Scale Video Generative Model

    ...Wan2.1 focuses on efficient video synthesis while maintaining rich semantic and aesthetic detail, enabling applications in content creation, entertainment, and research. The model supports text-to-video and image-to-video generation tasks with flexible resolution options suitable for various GPU hardware configurations. Wan2.1’s architecture balances generation quality and inference cost, paving the way for later improvements seen in Wan2.2 such as Mixture-of-Experts and enhanced aesthetics. It was trained on large-scale video and image datasets, providing generalization across diverse scenes and motion patterns.
    Downloads: 93 This Week
  • 8
    Lenovo Legion Linux Support

    Driver and tools for controlling Lenovo Legion laptops in Linux

    Lenovo Legion Linux (LLL) brings additional drivers and tools for Lenovo Legion series laptops to Linux. It is the alternative to Lenovo Vantage or Legion Zone (both Windows only). It allows you to control features like the fan curve, power mode, power limits, rapid charging, and more. This has been achieved through reverse engineering and disassembling the ACPI firmware, as well as the firmware and memory of the embedded controller (EC).
    Downloads: 14 This Week
  • 9
    ANE Training

    Training neural networks on Apple Neural Engine via APIs

    ...It is primarily intended as a research and educational proof of concept rather than a production library, highlighting what is technically possible with undocumented hardware access.
    Downloads: 0 This Week
  • 10
    Bailing

    Bailing is a voice dialogue robot similar to GPT-4o

    ...The project is modular: each core function — ASR, VAD, LLM, TTS — exists as a separately replaceable component, which allows flexibility in picking your preferred models depending on resources or languages. It aims to be light enough to run without a GPU, making it usable on modest hardware or edge devices, while still maintaining low latency and smooth interaction. Bailing includes a memory system, giving the assistant the ability to remember user preferences and context across sessions, which enables more personalized and context-aware conversations.
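    To make the modular design concrete, here is a purely illustrative sketch of how swappable VAD/ASR/LLM/TTS stages could be composed; the class and method names are hypothetical and are not Bailing's actual API.

```python
class VoiceAssistant:
    """Illustrative only: hypothetical interfaces, not Bailing's real classes."""

    def __init__(self, vad, asr, llm, tts, memory=None):
        # Each stage is an independent, replaceable component, so lighter models
        # can be swapped in when no GPU is available.
        self.vad, self.asr, self.llm, self.tts = vad, asr, llm, tts
        self.memory = memory if memory is not None else []

    def handle(self, audio_chunk):
        if not self.vad.is_speech(audio_chunk):
            return None
        text = self.asr.transcribe(audio_chunk)           # speech -> text
        reply = self.llm.chat(text, history=self.memory)  # text -> response
        self.memory.append((text, reply))                 # keep context across turns
        return self.tts.synthesize(reply)                 # response -> audio
```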
    Downloads: 1 This Week
  • 11
    FlashMLA

    FlashMLA: Efficient Multi-head Latent Attention Kernels

    ...In compute-bound settings, it can reach up to ~660 TFLOPS on H800 SXM5 hardware, while in memory-bound configurations it can push memory throughput to ~3000 GB/s. The team regularly updates it with performance improvements; for example, a 2025 update claims 5% to 15% gains on compute-bound workloads while maintaining API compatibility.
    Downloads: 0 This Week
  • 12
    Intel Extension for PyTorch

    A Python package for extending the official PyTorch

    Intel® Extension for PyTorch* extends PyTorch* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs through the PyTorch* xpu device.
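    A short sketch of the usual pattern: import the extension, move the model to the xpu device, and let ipex.optimize apply the hardware-specific optimizations (the model and shapes below are placeholders).

```python
import torch
import intel_extension_for_pytorch as ipex  # registers the xpu device and optimizations

model = torch.nn.Linear(128, 64)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Move to an Intel discrete GPU and apply IPEX's operator/graph optimizations.
model = model.to("xpu")
model.train()
model, optimizer = ipex.optimize(model, optimizer=optimizer)

x = torch.randn(32, 128, device="xpu")
loss = model(x).sum()
loss.backward()
optimizer.step()
```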
    Downloads: 2 This Week
  • 13
    CogVideo

    Text and image to video generation: CogVideoX and CogVideo

    CogVideo is an open-source family of advanced video generation models that can create videos from text, images, or existing video inputs. Built on large-scale Transformer and diffusion architectures, it enables multimodal generation across text-to-video, image-to-video, and video continuation tasks. The latest CogVideoX models offer higher resolution outputs, longer video durations, and improved controllability through prompt engineering. The project includes tools for inference,...
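    CogVideoX checkpoints are commonly run through Hugging Face diffusers; the sketch below assumes that route (the model ID, dtype, and prompt are illustrative) rather than the repository's own inference scripts.

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# Illustrative checkpoint and settings; larger CogVideoX variants need more VRAM.
pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)
pipe.to("cuda")

result = pipe(prompt="A panda playing a guitar by a quiet lake", num_inference_steps=50)
export_to_video(result.frames[0], "panda.mp4", fps=8)
```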
    Downloads: 26 This Week
  • 14
    Apple Silicon Guide

    Learn all about the A17 Pro, A16 Bionic, R1, M1-series

    ...For creative and power users, there are performance recommendations for video/audio production, machine learning workflows, and GPU-accelerated tasks.
    Downloads: 3 This Week
  • 15
    VK-GL-CTS

    Khronos Vulkan, OpenGL, and OpenGL ES Conformance Tests

    ...These tests are essential for vendors seeking certification, as they rigorously check the correctness and completeness of driver implementations against standardized behavior. The suite contains thousands of automated tests that assess rendering accuracy, API behavior, memory usage, and performance consistency. It is widely used by GPU vendors and developers to ensure compatibility, stability, and reliability across platforms and hardware.
    Downloads: 6 This Week
  • 16
    SimpleLLM

    950-line, minimal, extensible LLM inference engine built from scratch

    SimpleLLM is a minimal, extensible large language model inference engine implemented in roughly 950 lines of code, built from scratch to serve both as a learning tool and a research platform for novel inference techniques. It provides the core components of an LLM runtime—such as tokenization, batching, and asynchronous execution—without the abstraction overhead of more complex engines, making it easier for developers and researchers to understand and modify. Designed to run efficiently on...
    Downloads: 2 This Week
  • 17
    Metal.jl

    Metal programming in Julia

    With Metal.jl it's possible to program GPUs on macOS using the Metal programming framework. The package is a work in progress. There are bugs, functionality is missing, and performance hasn't been optimized. Expect to have to make changes to this package if you want to use it. PRs are very welcome. These requirements are fairly strict, and are due to our limited development resources (manpower, hardware). Technically, they can be relaxed. If you are interested in contributing to this, see...
    Downloads: 1 This Week
  • 18
    FurMark

    GPU stress test OpenGL and Vulkan graphics benchmark Windows/Linux

    FurMark is an intensive benchmarking tool designed to evaluate the performance of graphics cards using fur rendering algorithms. This tool is particularly effective in generating high workloads that can significantly increase the temperature of the GPU, making it a useful utility for testing the stability and stress tolerance of graphics cards. By simulating demanding rendering tasks, FurMark serves as a comprehensive test for assessing the robustness and thermal performance of GPUs under...
    Downloads: 327 This Week
  • 19
    Lemonade

    Lemonade helps users run local LLMs with the highest performance

    Lemonade is a local LLM runtime that aims to deliver the highest possible performance on your own hardware by auto-configuring state-of-the-art inference engines for both NPUs and GPUs. The project positions itself as a “local LLM server” you can run on laptops and workstations, abstracting away backend differences while giving you a single place to serve and manage models. Its README emphasizes real-world adoption across startups, research groups, and large companies, signaling a focus on...
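    Assuming Lemonade follows the common OpenAI-compatible local-server convention (the endpoint, port, and model ID below are guesses to verify against its documentation), a standard client can talk to it like this:

```python
from openai import OpenAI  # any OpenAI-compatible client library

# Base URL and model name are illustrative; check Lemonade's docs for the real values.
client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="not-needed-locally")

resp = client.chat.completions.create(
    model="Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Summarize what an NPU is in one sentence."}],
)
print(resp.choices[0].message.content)
```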
    Downloads: 7 This Week
  • 20
    mosaicml composer

    Supercharge Your Model Training

    composer is a deep learning training framework built on PyTorch and designed to make large-scale model training more efficient, scalable, and customizable. At the center of the project is a highly optimized Trainer abstraction that simplifies the management of training loops, parallelization, metrics, logging, and data loading. The framework is intended for modern workloads that may span anything from a single GPU to very large distributed training environments, which makes it suitable for...
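    A minimal sketch of the Trainer-centric workflow, with a toy model and synthetic data standing in for a real training setup (the duration and device values are just examples).

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from composer import Trainer
from composer.models import ComposerClassifier

# Toy classifier and synthetic data standing in for a real workload.
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
model = ComposerClassifier(module=net, num_classes=10)

dataset = TensorDataset(torch.randn(256, 1, 28, 28), torch.randint(0, 10, (256,)))
train_dataloader = DataLoader(dataset, batch_size=32)

trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    max_duration="1ep",  # durations can be expressed in epochs, batches, or samples
    device="gpu",        # or "cpu"; multi-GPU runs use the composer launcher
)
trainer.fit()
```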
    Downloads: 1 This Week
  • 21
    TensorRT LLM

    TensorRT LLM provides users with an easy-to-use Python API

    TensorRT-LLM is an open-source high-performance inference library specifically designed to optimize and accelerate large language model deployment on NVIDIA GPUs. It provides a Python-based API built on top of PyTorch that allows developers to define, customize, and deploy LLMs efficiently across a variety of hardware configurations, from single GPUs to large multi-node clusters. The library focuses on maximizing throughput and minimizing latency through advanced techniques such as...
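    The high-level Python API is roughly of the following shape; the model ID and sampling settings are placeholders, and a supported NVIDIA GPU plus a working TensorRT-LLM installation are assumed.

```python
from tensorrt_llm import LLM, SamplingParams  # high-level LLM API

# Model name and parameters are illustrative; engines are optimized when the model loads.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
params = SamplingParams(max_tokens=64, temperature=0.8)

outputs = llm.generate(["Explain why KV-cache size limits batch size."], params)
print(outputs[0].outputs[0].text)
```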
    Downloads: 3 This Week
  • 22
    Radeon-ID

    Official Mirror for AMD Domestic Community Drivers Indonesia

    AMD Domestic Community Driver with Multi-Kernel Radeon Driver Release. Experience enhanced performance and flexibility with our multi-kernel Radeon driver, tailored for the AMD community. 📢 Need help or have questions? Join our 24/7 Discord support channel for real-time assistance and discussions: https://discord.gg/rdnid
    Downloads: 4,735 This Week
  • 23
    MSI Afterburner

    MSI Afterburner: Overclock, monitor, and optimize your GPU.

    ...Furthermore, MSI Afterburner supports video recording, enabling users to capture their gameplay or overclocking sessions with ease. Its compatibility with all major graphics card brands makes it an indispensable tool for anyone looking to push their hardware to the limit.
    Downloads: 73 This Week
  • 24
    Stable Diffusion web UI for AMDGPUs

    Stable Diffusion WebUI optimized for AMD GPUs with editing tools

    ...A one-click setup script simplifies installation, although Python and Git are still required. Stable Diffusion WebUI AMDGPU focuses on improving accessibility for AMD GPU users, offering an alternative to CUDA-based implementations while maintaining compatibility with many existing Stable Diffusion capabilities and extensions.
    Downloads: 5 This Week
  • 25
    CUDA-QX

    Accelerated libraries for quantum-classical computing built on CUDA-Q

    CUDA-QX is a collection of accelerated libraries built on top of the CUDA-Q platform, designed to enable rapid development of hybrid quantum-classical applications. It extends the CUDA-Q programming model by providing optimized implementations of domain-specific quantum computing primitives and workflows. The libraries are intended to help researchers and developers leverage GPUs, CPUs, and quantum processing units together in a unified computational model. CUDA-QX focuses on key areas such...
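    CUDA-QX layers on top of CUDA-Q kernels like the one sketched below, a plain CUDA-Q Bell-state example shown only to indicate the programming model the libraries extend (a CUDA-Q installation is assumed).

```python
import cudaq  # base CUDA-Q platform that the CUDA-QX libraries build on

@cudaq.kernel
def bell():
    # Prepare and measure a two-qubit Bell state.
    qubits = cudaq.qvector(2)
    h(qubits[0])
    x.ctrl(qubits[0], qubits[1])
    mz(qubits)

# Sampling runs on GPU-accelerated simulators or attached QPUs, depending on the target.
print(cudaq.sample(bell))
```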
    Downloads: 2 This Week