Search Results for "gpu processing" - Page 5

Showing 116 open source projects for "gpu processing"

  • 1

    Pixma Frontpanel Library

    Library for using Canon Pixma inkjet printer frontpanels

    This library makes it possible to reuse the LCD/button frontpanels of Canon Pixma MP620 / MP630 inkjet printers in your own projects. The library makes use of the processing power and RAM built into the frontpanels. It is written for Maple / Olimexino boards but might be expanded in the future. Please see the wiki for info on how the display works, the protocol used, and how to connect your display to your board: http://sourceforge.net/p/pixmafrontpanel/wiki/Home/
    Downloads: 0 This Week
    Last Update:
    See Project
  • 2
    GPU-accelerated LIBSVM is a modification of the original LIBSVM that exploits the CUDA framework to significantly reduce processing time while producing identical results. The functionality and interface of LIBSVM remain the same (a usage sketch follows below).
    Downloads: 0 This Week
    Last Update:
    See Project
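
    Because GPU-accelerated LIBSVM keeps the stock LIBSVM interface, ordinary LIBSVM calls are all that is needed. The minimal sketch below uses the official libsvm Python bindings; swapping in the CUDA-enabled build at compile/install time is an assumption here, not something described in this listing.

      # Standard LIBSVM usage; with GPU-accelerated LIBSVM built in place of the
      # stock library (assumption), the same calls run kernel evaluations on CUDA
      # and should produce identical results to the CPU build.
      from libsvm.svmutil import svm_train, svm_predict

      # Toy two-class problem: labels plus sparse feature dictionaries.
      labels = [1, -1, 1, -1]
      features = [{1: 0.9, 2: 0.1}, {1: 0.1, 2: 0.8},
                  {1: 0.8, 2: 0.2}, {1: 0.2, 2: 0.9}]

      # Train an RBF-kernel C-SVC (-t 2) with C = 1.
      model = svm_train(labels, features, '-t 2 -c 1')

      # Predict on the training points and report accuracy.
      predicted, accuracy, _ = svm_predict(labels, features, model)
      print(predicted, accuracy)
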
  • 3
    GPUBench2 is a cross-platform suite for analyzing the performance of GPUs (Graphics Processing Units). GPUBench2 gathers state-of-the-art parameters from the different interfaces available (OpenGL, Cg, CUDA).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    Qaquarelle is an open source Qt4-based graphical editor whose goal is to provide a natural way of painting with emulated traditional instruments, including full support for tablet input and OpenGL-based processing on the GPU.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 5
    This code is provided as supplementary material for the book chapter "Exploiting graphics processing units for computational biology and bioinformatics," by Payne, Sinnott-Armstrong, and Moore.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
    The GWO library is a numerical calculation library for diffraction integrals using a GPU (Graphics Processing Unit). Even optics engineers and researchers with no GPU knowledge can easily tap into GPU calculation power through the GWO library (an illustrative diffraction calculation follows below).
    Downloads: 0 This Week
    Last Update:
    See Project
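
    The GWO library's own API is not shown in this listing, so the sketch below only illustrates the kind of diffraction integral such a library evaluates: an angular-spectrum propagation implemented with NumPy FFTs. All parameter values are illustrative assumptions; replacing numpy with a GPU array library such as CuPy is how this calculation is typically moved onto the GPU.

      # Angular-spectrum diffraction propagation in NumPy. This is NOT the GWO API;
      # it only shows the sort of integral a GPU diffraction library computes.
      import numpy as np

      wavelength = 633e-9   # He-Ne wavelength in metres (illustrative)
      pixel = 10e-6         # sampling pitch in metres
      n = 512               # grid size
      z = 0.05              # propagation distance in metres

      # Source field: a circular aperture of 0.5 mm radius.
      x = (np.arange(n) - n / 2) * pixel
      X, Y = np.meshgrid(x, x)
      field = (X**2 + Y**2 < (0.5e-3)**2).astype(complex)

      # Transfer function H = exp(i*2*pi*z*sqrt(1/lambda^2 - fx^2 - fy^2)),
      # with evanescent components masked out.
      fx = np.fft.fftfreq(n, d=pixel)
      FX, FY = np.meshgrid(fx, fx)
      arg = 1.0 / wavelength**2 - FX**2 - FY**2
      H = np.where(arg > 0, np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0))), 0)

      # Propagate: forward FFT, multiply by H, inverse FFT.
      propagated = np.fft.ifft2(np.fft.fft2(field) * H)
      print(np.abs(propagated).max())
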
  • 7
    BrokenFinger is a collection of demo 3D programs showing the latest implementations in the areas of Document Object Model, GPU-bound processing, and procedural modeling (e.g., CGA Shape).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    proGPUKLT is a library for the Processing programming language and environment that wraps a GPU implementation of the Kanade-Lucas-Tomasi feature tracker used in computer vision applications.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    An OS- and hardware-vendor-independent, GPU-accelerated image and video processing library written in C. Its interface allows easy combination and manipulation of customizable filters. It works with an active OpenGL context.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    GLEWpy aims to bring advanced OpenGL extensions to Python. This allows the Python OpenGL developer to use features such as fragment/vertex shaders and image processing on the GPU. It serves as a complement to PyOpenGL and toolkits such as GLUT and SDL (a shader sketch follows below).
    Downloads: 0 This Week
    Last Update:
    See Project
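
    GLEWpy's own extension-loading calls are not reproduced in this listing, so the sketch below sticks to plain PyOpenGL (which GLEWpy is described as complementing) and GLUT, simply to show what compiling fragment/vertex shaders from Python looks like; the window setup and shader sources are illustrative assumptions, not GLEWpy's API.

      # Compiling fragment/vertex shaders from Python with PyOpenGL. A GLUT window
      # is created solely to obtain an active OpenGL context.
      from OpenGL.GL import GL_VERTEX_SHADER, GL_FRAGMENT_SHADER, glUseProgram
      from OpenGL.GL.shaders import compileProgram, compileShader
      from OpenGL.GLUT import glutInit, glutInitDisplayMode, glutCreateWindow, GLUT_RGBA

      VERTEX_SRC = """
      #version 120
      void main() {
          gl_FrontColor = gl_Color;        // pass the vertex colour through
          gl_Position = ftransform();      // fixed-function transform
      }
      """

      FRAGMENT_SRC = """
      #version 120
      void main() {
          gl_FragColor = vec4(1.0 - gl_Color.rgb, 1.0);   // invert colour on the GPU
      }
      """

      glutInit()
      glutInitDisplayMode(GLUT_RGBA)
      glutCreateWindow(b"shader-context")   # window exists only to provide a context

      program = compileProgram(compileShader(VERTEX_SRC, GL_VERTEX_SHADER),
                               compileShader(FRAGMENT_SRC, GL_FRAGMENT_SHADER))
      glUseProgram(program)
      print("linked shader program:", program)
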
  • 11
    GPUVision is a framework for creating GPU-based general purpose programs, image processing programs, and computer vision programs in C++. Supported libraries include matrix operations, graph partitioning, kernels, corner detection, edge detection, etc.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 12
    Zen3D

    Zen3D is a 3D engine and editor for creating games

    Zen3D is a full 3D engine and editor to create games. It supports DX12/11/GL/Vulkan & Metal. It has a bespoke scripting language called ZenScript. It includes RTX support for Raytraced or Hybrid rendering.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
    mms-300m-1130-forced-aligner

    CTC-based forced aligner for audio-text in 158 languages

    ...Unlike other tools, it provides significant memory efficiency compared to the TorchAudio forced alignment API. Users can integrate it easily through the Python package ctc-forced-aligner, and it supports GPU acceleration via PyTorch. The alignment pipeline includes audio processing, emission generation, tokenization, and span detection, making it suitable for speech analysis, transcription syncing, and dataset creation. This model is especially useful for researchers and developers working with low-resource languages or building multilingual speech systems.
    Downloads: 0 This Week
    Last Update:
    See Project
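
    The ctc-forced-aligner package's exact API is not reproduced here; the sketch below only illustrates the core step any CTC forced aligner performs, a Viterbi dynamic program over a per-frame emission matrix that yields per-frame token labels (from which spans are derived). Shapes, the toy vocabulary, and the random emissions are assumptions for illustration.

      # Conceptual CTC forced alignment: Viterbi over an emission matrix.
      # This is not the ctc-forced-aligner package API, only the underlying idea.
      import numpy as np

      def ctc_forced_align(log_probs, tokens, blank=0):
          """log_probs: (T, V) per-frame log-probabilities; tokens: target ids."""
          ext = [blank]
          for tok in tokens:
              ext += [tok, blank]                 # interleave blanks: b y1 b y2 b ...
          T, S = log_probs.shape[0], len(ext)
          alpha = np.full((T, S), -np.inf)
          back = np.zeros((T, S), dtype=int)
          alpha[0, 0] = log_probs[0, ext[0]]
          alpha[0, 1] = log_probs[0, ext[1]]
          for t in range(1, T):
              for s in range(S):
                  # Allowed predecessors: stay, advance by one, or skip a blank.
                  cands = [alpha[t - 1, s]]
                  if s >= 1:
                      cands.append(alpha[t - 1, s - 1])
                  if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                      cands.append(alpha[t - 1, s - 2])
                  step = int(np.argmax(cands))
                  alpha[t, s] = cands[step] + log_probs[t, ext[s]]
                  back[t, s] = step
          # Backtrack from the better of the two valid end states.
          s = S - 1 if alpha[-1, S - 1] >= alpha[-1, S - 2] else S - 2
          frames = []
          for t in range(T - 1, -1, -1):
              frames.append(ext[s])
              s -= back[t, s]
          return frames[::-1]                     # per-frame token ids (0 = blank)

      # Toy run: 6 frames, vocab {0: blank, 1: "hi", 2: "there"}, target "hi there".
      rng = np.random.default_rng(0)
      emissions = np.log(rng.dirichlet(np.ones(3), size=6))
      print(ctc_forced_align(emissions, [1, 2]))
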
  • 14
    Ministral 3 8B Instruct 2512

    Compact 8B multimodal instruct model optimized for edge deployment

    Ministral 3 8B Instruct 2512 is a balanced, efficient model in the Ministral 3 family, offering strong multimodal capabilities within a compact footprint. It combines an 8.4B-parameter language model with a 0.4B vision encoder, enabling both text reasoning and image understanding. This FP8 instruct-fine-tuned variant is optimized for chat, instruction following, and structured outputs, making it ideal for daily assistant tasks and lightweight agentic workflows. Designed for edge deployment,...
    Downloads: 0 This Week
    Last Update:
    See Project
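
    A single-GPU, text-only chat inference sketch for a model of this class, using Hugging Face Transformers. The repository id below is a placeholder assumption (the listing gives only the model name), BF16 loading stands in for the FP8 variant described above, and the vision encoder is not exercised here.

      # Hedged single-GPU inference sketch with Hugging Face Transformers.
      # The model id is a placeholder assumption; check the actual model card.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "mistralai/Ministral-3-8B-Instruct-2512"   # placeholder, not verified

      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id,
          torch_dtype=torch.bfloat16,   # BF16 keeps an 8B model within one GPU
          device_map="auto",            # place weights on the available GPU(s)
      )

      # Chat prompt through the tokenizer's chat template (assumed to be provided).
      messages = [{"role": "user", "content": "Explain FP8 inference in two sentences."}]
      inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                             return_tensors="pt").to(model.device)
      output = model.generate(inputs, max_new_tokens=128)
      print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
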
  • 15
    Ministral 3 3B Base 2512

    Small 3B-base multimodal model ideal for custom AI on edge hardware

    ...As the base pretrained model, it is not fine-tuned for instructions or reasoning, making it the ideal foundation for custom post-training, domain adaptation, or specialized downstream tasks. The model is fully optimized for edge deployment and can run locally on a single GPU, fitting in 16GB VRAM in BF16 or less than 8GB when quantized. It supports dozens of languages, making it practical for multilingual, global, or distributed environments. With a large 256k token context window, it can handle long documents, extended inputs, or multi-step processing workflows even at its small size.
    Downloads: 0 This Week
    Last Update:
    See Project
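
    A hedged sketch of the "under 8 GB when quantized" loading path mentioned above, using 4-bit quantization via bitsandbytes in Transformers. The repository id is again a placeholder assumption, and as a base (non-instruct) model it is prompted with plain text rather than a chat template.

      # Loading a small base model under a tight VRAM budget with 4-bit quantization.
      # The model id is a placeholder assumption; memory figures come from the listing.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

      model_id = "mistralai/Ministral-3-3B-Base-2512"       # placeholder, not verified
      quant = BitsAndBytesConfig(load_in_4bit=True,
                                 bnb_4bit_compute_dtype=torch.bfloat16)

      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id,
                                                   quantization_config=quant,
                                                   device_map="auto")

      # Base model: plain text continuation, no chat template.
      prompt = "A graphics processing unit accelerates"
      ids = tokenizer(prompt, return_tensors="pt").to(model.device)
      out = model.generate(**ids, max_new_tokens=40)
      print(tokenizer.decode(out[0], skip_special_tokens=True))
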
  • 16
    Ministral 3 14B Instruct 2512

    Efficient 14B multimodal instruct model with edge deployment and FP8

    Ministral 3 14B Instruct 2512 is the largest model in the Ministral 3 family, delivering frontier performance comparable to much larger systems while remaining optimized for edge-level deployment. It combines a 13.5B-parameter language model with a 0.4B-parameter vision encoder, enabling strong multimodal understanding in both text and image tasks. This FP8 instruct-tuned variant is designed specifically for chat, instruction following, and agentic workflows with robust system-prompt...
    Downloads: 0 This Week
    Last Update:
    See Project