16 projects for "gpu process" with 1 filter applied:

  • 1
    GPU Hot

    Real-time NVIDIA GPU dashboard

    GPU Hot is an open-source, lightweight monitoring dashboard designed to provide real-time visibility into NVIDIA GPU performance across single machines or entire clusters. The project offers a self-hosted web interface that streams hardware metrics directly from GPU servers, enabling developers, ML engineers, and system administrators to observe GPU utilization and system behavior in real time through a browser. The dashboard collects and displays a wide range of performance metrics...
    Downloads: 4 This Week
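    A rough sketch of the kind of collection loop such a dashboard runs, assuming only the stock nvidia-smi CLI (the query flags below are standard; the loop and field choice are illustrative, not GPU Hot's actual code):

```python
# Minimal metrics-polling sketch in the spirit of GPU Hot. It shells out to
# nvidia-smi; GPU Hot itself streams similar metrics to a web dashboard.
import subprocess
import time

QUERY = "utilization.gpu,memory.used,memory.total,temperature.gpu"

def sample():
    # --query-gpu and --format=csv,noheader,nounits are standard nvidia-smi flags
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        text=True,
    )
    return [line.split(", ") for line in out.strip().splitlines()]

while True:
    for idx, (util, used, total, temp) in enumerate(sample()):
        print(f"GPU{idx}: {util}% util, {used}/{total} MiB, {temp} C")
    time.sleep(1)  # poll once per second
```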
  • 2
    Matcha-TTS

    A fast TTS architecture with conditional flow matching

    ...Users can train on standard datasets like LJSpeech or plug in their own corpora, with helper tools for computing dataset statistics, extracting phoneme durations, and running multi-GPU training.
    Downloads: 16 This Week
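    For orientation, here is a minimal sketch of the conditional flow matching objective this architecture is built on, written as generic PyTorch; the model signature and tensor shapes are assumptions, not Matcha-TTS's actual code:

```python
# Generic conditional flow matching training step (a sketch, not the
# project's code). An optimal-transport path interpolates noise x0 toward
# data x1; the network regresses the constant velocity of that path.
import torch
import torch.nn.functional as F

def cfm_loss(model, x1, cond):
    # x1: (batch, dim) data sample, cond: conditioning (e.g. text encoding)
    x0 = torch.randn_like(x1)                         # noise sample
    t = torch.rand(x1.size(0), 1, device=x1.device)   # random time in [0, 1]
    xt = (1 - t) * x0 + t * x1                        # point on the straight path
    target_v = x1 - x0                                # velocity of that path
    pred_v = model(xt, t, cond)                       # hypothetical signature
    return F.mse_loss(pred_v, target_v)
```

    At synthesis time the learned velocity field is integrated with an ODE solver in only a handful of steps, which is what makes this family of models fast.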
  • 3
    GPT4All

    Run Local LLMs on Any Device. Open-source

    GPT4All is an open-source project that allows users to run large language models (LLMs) locally on their desktops or laptops, eliminating the need for API calls or GPUs. The software provides a simple, user-friendly application that can be downloaded and run on various platforms, including Windows, macOS, and Ubuntu, without requiring specialized hardware. It integrates with the llama.cpp implementation and supports multiple LLMs, allowing users to interact with AI models privately. This...
    Downloads: 163 This Week
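    A minimal sketch with the project's Python bindings (pip install gpt4all); the model file name is one example from the catalog (an assumption on my part) and is downloaded on first use:

```python
# CPU inference by default; no API key or GPU required.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # example catalog model name
with model.chat_session():
    print(model.generate("Why is local inference private?", max_tokens=128))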
  • 4
    AI YouTube Shorts Generator

    A Python tool that uses GPT-4, FFmpeg, and OpenCV

    AI-YouTube-Shorts-Generator is a Python-based tool that automates the creation of short-form vertical video clips (“shorts”) from longer source videos — ideal for adapting content for platforms like YouTube Shorts, Instagram Reels, or TikTok. It analyzes input video (whether a local file or a YouTube URL), transcribes audio (with optional GPU-accelerated speech-to-text), uses an AI model to identify the most compelling or engaging segments, and then crops/resizes the video and applies...
    Downloads: 15 This Week
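    One illustrative step from such a pipeline, assuming nothing beyond a stock ffmpeg install (a sketch, not the project's code): center-cropping a landscape video to a 9:16 vertical frame.

```python
# Crop a landscape video to a centered 9:16 vertical frame with ffmpeg.
import subprocess

def crop_vertical(src: str, dst: str) -> None:
    # crop=w:h derives the width from the input height to keep a 9:16
    # aspect; ffmpeg centers the crop window when x:y are omitted.
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-vf", "crop=ih*9/16:ih", "-c:a", "copy", dst],
        check=True,
    )

crop_vertical("talk.mp4", "talk_vertical.mp4")
```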
  • 5
    Tracy Profiler

    Frame profiler

    A real-time, nanosecond-resolution, remote-telemetry, hybrid frame and sampling profiler for games and other applications. Tracy can profile CPU code (direct support for C, C++, Lua, and Python, with third-party bindings available for many other languages such as Rust, Zig, C#, OCaml, and Odin), GPU work (all major graphics APIs: OpenGL, Vulkan, Direct3D 11/12, and OpenCL), memory allocations, locks, and context switches, and can automatically attribute...
    Downloads: 4 This Week
  • 6
    CUDA Containers for Edge AI & Robotics

    Machine Learning Containers for NVIDIA Jetson and JetPack-L4T

    ...These containers simplify the deployment of complex machine learning environments by bundling libraries such as CUDA, TensorRT, and deep learning frameworks into reproducible container images. The project is particularly useful for developers building edge AI and robotics systems that rely on GPU-accelerated inference and real-time computer vision. By using containerized environments, developers can ensure that their applications run consistently across different Jetson platforms and JetPack versions. The repository also includes build tools and package management utilities that help automate the process of assembling machine learning environments.
    Downloads: 0 This Week
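    A hedged launch sketch: running one of the prebuilt images with Docker's NVIDIA runtime, wrapped in Python for consistency with the other examples here. The image tag below is an assumption; the repository's own tooling selects the right image for your JetPack-L4T version.

```python
# Launch a Jetson ML container with GPU access via the NVIDIA runtime.
import subprocess

IMAGE = "dustynv/l4t-pytorch:r36.2.0"  # hypothetical tag; tags track JetPack releases

subprocess.run(
    ["docker", "run", "--runtime", "nvidia", "-it", "--rm",
     "--network", "host", IMAGE],
    check=True,
)
```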
  • 7
    local-llm

    Run LLMs locally on Cloud Workstations

    local-llm is a development framework that enables developers to run large language models locally within Google Cloud Workstations or standard environments without requiring GPU hardware. It focuses on making generative AI development more accessible by leveraging quantized models and CPU-based execution, eliminating the dependency on expensive GPU infrastructure. The repository includes tools, Docker configurations, and command-line utilities that simplify the process of downloading, running, and interacting with language models directly on local or cloud-based workstations. ...
    Downloads: 1 This Week
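    The same idea in miniature using llama-cpp-python, a common backend for quantized CPU inference (an illustration of the technique, not local-llm's own interface; the model path is a placeholder):

```python
# CPU-only inference over a quantized GGUF model; no GPU required.
from llama_cpp import Llama

llm = Llama(model_path="models/model-q4_k_m.gguf", n_ctx=2048)
out = llm("Explain quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```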
  • 8
    Firefly LLM

    A training framework for large language models, from pre-training to fine-tuning

    Firefly is an open-source framework designed to simplify the training and fine-tuning of large language models through a unified and configurable workflow. The project provides a comprehensive environment where developers can perform tasks such as model pre-training, instruction tuning, and preference optimization using widely adopted machine learning techniques. Its architecture supports both full-parameter training and parameter-efficient strategies like LoRA and QLoRA, making it suitable...
    Downloads: 0 This Week
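    As a sketch of the parameter-efficient path, here is LoRA wiring using the peft library directly; Firefly drives the equivalent through its own training configs, and the base model and target modules below are assumptions:

```python
# Wrap a base model with LoRA adapters so only a small set of low-rank
# matrices is trained, instead of all model parameters.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # example base
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # a tiny fraction of the full model
```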
  • 9
    text-generation-webui-colab

    A Colab Gradio web UI for running large language models

    text-generation-webui-colab is a repository that provides Google Colab notebooks designed to simplify the process of running large language models through the popular text-generation-webui interface. The project automates the setup and deployment of AI models in cloud-based notebook environments, allowing users to experiment with text generation systems without configuring complex local environments. By leveraging Google Colab, the repository enables users to run open-source models such as LLaMA-based systems and other instruction-tuned models using accessible GPU resources. ...
    Downloads: 0 This Week
  • 10
    Neural Tangents

    Fast and Easy Infinite Neural Networks in Python

    Neural Tangents is a high-level neural network API for specifying complex, hierarchical models at both finite and infinite width, built in Python on top of JAX and XLA. It lets researchers define architectures from familiar building blocks—convolutions, pooling, residual connections, and nonlinearities—and obtain not only the finite network but also the corresponding Gaussian Process (GP) kernel of its infinite-width limit. With a single specification, you can compute NNGP and NTK kernels,...
    Downloads: 11 This Week
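    A small example in the style of the project's documented stax API: define a network once and evaluate its infinite-width NNGP and NTK kernels in closed form.

```python
# Build a fully connected network and compute its infinite-width kernels.
import jax.numpy as jnp
from neural_tangents import stax

init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1),
)

x1 = jnp.ones((3, 8))  # 3 inputs of dimension 8
x2 = jnp.ones((4, 8))
kernel = kernel_fn(x1, x2, ("nngp", "ntk"))  # closed-form infinite-width kernels
print(kernel.nngp.shape, kernel.ntk.shape)   # (3, 4) (3, 4)
```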
  • 11
    Rio

    A hardware-accelerated GPU terminal emulator powered by WebGPU.

    Rio is a terminal application built with Rust, WebGPU, and the Tokio runtime. It aims to deliver the best frames-per-second experience when you want it, but it can also be configured to use the GPU minimally. It benefits from Rust's memory-safety guarantees. The terminal renderer is based on a Redux-style state machine, so lines that have not been updated are not redrawn, keeping the rendering process minimal most of the time. Rio is also designed to support a WebAssembly runtime, so in the future you will be able to define how the tab system works with a WASM plugin written in your favorite language. ...
    Downloads: 3 This Week
  • 12
    Stable Diffusion

    A latent text-to-image diffusion model

    Stable Diffusion is a widely used open-source latent text-to-image diffusion model developed by the CompVis group for generating high-quality images from natural language prompts. The model operates by conditioning a diffusion process on text embeddings produced by a CLIP text encoder, enabling detailed and controllable image synthesis. It was trained on large-scale image datasets and later fine-tuned to produce 512×512 images with strong visual fidelity. Because the system runs efficiently...
    Downloads: 36 This Week
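    A short generation sketch via Hugging Face diffusers, which packages the same CompVis weights; the original repository ships its own txt2img scripts, so treat this as one convenient route rather than the canonical one.

```python
# Text-to-image sampling with the CompVis v1-4 weights on a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```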
  • 13
    What happens when

    What happens when you type google into your browser and press enter?

    What happens when is a large collaborative documentation-style project that aims to answer in exhaustive detail the canonical interview/thought experiment question, “What happens when you type google into your browser and press Enter?” Rather than giving a high-level overview, the repository tries to break down every step in the process, from low-level events (keyboard press, OS events, keyboard interrupts), through OS-level handling (keyboard scan codes, key events), parsing, DNS lookup, networking (ARP, socket creation, TCP/TLS handshake), HTTP requests, browser behavior, HTML/CSS/JS parsing, rendering engine, GPU rendering, layout, to final drawing and user-visible output. ...
    Downloads: 0 This Week
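    A tiny executable slice of that chain, covering just the DNS lookup, TCP connection, and TLS handshake stages using Python's standard library:

```python
# Resolve a host, open a TCP connection, and complete a TLS handshake.
import socket
import ssl

host = "www.google.com"
# DNS resolution (step one of the chain)
ip, port = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)[0][4][:2]
print("resolved", host, "->", ip)

ctx = ssl.create_default_context()
with socket.create_connection((ip, port)) as tcp:             # TCP handshake
    with ctx.wrap_socket(tcp, server_hostname=host) as tls:   # TLS handshake
        print("negotiated", tls.version(), "using", tls.cipher()[0])
```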
  • 14
    LUMINOTH

    Deep Learning toolkit for Computer Vision

    LUMINOTH is an open-source deep learning toolkit designed for computer vision tasks, particularly object detection. The framework is implemented in Python and built on top of TensorFlow and the Sonnet neural network library, providing a modular environment for training and deploying detection models. It was created to simplify the process of building and experimenting with deep learning models capable of identifying objects within images. Luminoth includes support for popular object...
    Downloads: 0 This Week
  • 15
    Accelerated Feature Extraction Tool

    A fast GPU accelerated feature extraction software for speech analysis

    A fast feature extraction software tool for speech analysis and processing. It implements standard MFCC, PLP, and TRAPS features and is specially designed to process very large audio data sets. It uses GPU acceleration when a compatible GPU is available (both CUDA and OpenCL are supported, on NVIDIA, AMD, and Intel GPUs) and falls back to the CPU's SSE intrinsic instruction set when no compatible GPU is present. Output files are stored in HTK format. The software is developed at the Department of Cybernetics at the University of West Bohemia in Pilsen.
    Downloads: 0 This Week
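    For reference, the same MFCC features computed on the CPU with librosa (an independent library used here purely for illustration; this tool's GPU/SSE pipeline and HTK output format are its own):

```python
# Compute 13 MFCCs per frame from a bundled example clip.
import librosa

y, sr = librosa.load(librosa.ex("trumpet"))         # bundled example audio
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # 13 coefficients per frame
print(mfcc.shape)                                   # (13, n_frames)
```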
  • 16
    Ministral 3 8B Reasoning 2512

    Efficient 8B multimodal model tuned for advanced reasoning tasks.

    Ministral 3 8B Reasoning 2512 is a balanced midsize model in the Ministral 3 family, delivering strong multimodal reasoning capabilities within an efficient footprint. It combines an 8.4B-parameter language model with a 0.4B vision encoder, enabling it to process both text and images for advanced reasoning tasks. This version is specifically post-trained for reasoning, making it well-suited for math, coding, and STEM applications requiring multi-step logic and problem-solving. Despite its reasoning-focused training, the model remains edge-optimized and can run locally on a single 24GB GPU in BF16, or under 12GB when quantized. ...
    Downloads: 0 This Week
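    A hedged, text-only loading sketch with transformers in BF16 on a single 24 GB GPU, as the description suggests is possible; the hub id and model class below are guesses, and since the checkpoint is multimodal the model card should be consulted for actual usage.

```python
# Load an 8B-class checkpoint in BF16 and run a short reasoning prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Ministral-3-8B-Reasoning-2512"  # hypothetical hub id

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(         # class is an assumption
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
prompt = "If 3x + 5 = 20, what is x? Think step by step."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```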