Showing 7 open source projects for "webgpu"

  • 1. wgpu

    Safe and portable GPU abstraction in Rust, implementing the WebGPU API

    wgpu is a safe and portable graphics library for Rust based on the WebGPU API. It is suitable for general-purpose graphics and compute on the GPU. Applications using wgpu run natively on Vulkan, Metal, DirectX 11/12, and OpenGL ES, and in browsers via WebAssembly on WebGPU and WebGL2. ANGLE is a translation layer from GLES to other backends, developed by Google. We support running our GLES3 backend over it in order to reach platforms with GLES2 or DX11 support, which aren't accessible otherwise...
    Downloads: 1 This Week
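
    For orientation, the snippet below shows the browser-side WebGPU calls that wgpu mirrors in Rust (adapter, device, shader module). It is a minimal TypeScript sketch of the standard navigator.gpu API, not wgpu's own Rust API, and it assumes a WebGPU-capable browser plus the @webgpu/types definitions.

      // Minimal bootstrap of the standard WebGPU browser API (the API wgpu implements).
      // Assumes a WebGPU-capable browser and the @webgpu/types TypeScript definitions.
      async function initWebGPU(): Promise<GPUDevice> {
        if (!navigator.gpu) {
          throw new Error("WebGPU is not supported in this browser");
        }
        // Pick a physical adapter, then open a logical device on it.
        const adapter = await navigator.gpu.requestAdapter();
        if (!adapter) throw new Error("No suitable GPU adapter found");
        const device = await adapter.requestDevice();

        // Compile a trivial WGSL compute shader to confirm the device is usable;
        // a real application would build pipelines and command buffers from here.
        device.createShaderModule({
          code: "@compute @workgroup_size(1) fn main() { }",
        });
        return device;
      }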
  • 2. react-llm

    Easy-to-use headless React Hooks to run LLMs in the browser with WebGPU

    Easy-to-use headless React Hooks to run LLMs in the browser with WebGPU. As simple as useLLM().
    Downloads: 0 This Week
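
    As a rough illustration of the "as simple as useLLM()" claim, a component might look like the sketch below. The package name (@react-llm/headless) and the hook's return shape (send, conversation, isGenerating) are assumptions made for illustration; check the project README for the actual API.

      import React, { useState } from "react";
      // Package name and hook return shape are assumptions; see the project README.
      import useLLM from "@react-llm/headless";

      export function ChatBox() {
        const [prompt, setPrompt] = useState("");
        // Hypothetical fields; the real hook may expose different names.
        const { send, conversation, isGenerating } = useLLM();

        return (
          <div>
            <ul>
              {conversation?.messages?.map((m: any, i: number) => (
                <li key={i}>{m.text}</li>
              ))}
            </ul>
            <input value={prompt} onChange={(e) => setPrompt(e.target.value)} />
            <button disabled={isGenerating} onClick={() => send(prompt)}>
              Send
            </button>
          </div>
        );
      }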
  • 3. ChatLLM Web

    Chat with LLMs like Vicuna entirely in your browser with WebGPU

    Chat with LLMs like Vicuna entirely in your browser with WebGPU: safely, privately, and with no server. Powered by web-llm. To use this app, you need a browser that supports WebGPU, such as Chrome 113 or Chrome Canary; Chrome versions ≤ 112 are not supported. You will need a GPU with about 6.4 GB of memory. If your GPU has less memory, the app will still run, but responses will be slower. The first time you use the app, you will need to download the model. For the Vicuna-7b model that we...
    Downloads: 0 This Week
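
    Because the app requires WebGPU (Chrome 113+), a quick capability check along these lines is useful before downloading the model. This is a generic TypeScript sketch, not code taken from ChatLLM Web itself.

      // Generic WebGPU capability check (not from ChatLLM Web's source).
      async function hasWebGPU(): Promise<boolean> {
        if (!("gpu" in navigator)) {
          console.warn("WebGPU unavailable: use Chrome 113+ or Chrome Canary");
          return false;
        }
        const adapter = await navigator.gpu.requestAdapter();
        if (!adapter) {
          console.warn("No WebGPU adapter found on this machine");
          return false;
        }
        return true;
      }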
  • 4. Whisper Turbo

    Cross-Platform, GPU Accelerated Whisper

    Whisper Turbo is a fast, cross-platform Whisper implementation, designed to run entirely client-side in your browser or Electron app.
    Downloads: 0 This Week
  • 5. luma.gl

    High-performance Toolkit for WebGL-based data visualization

    luma.gl is a GPU toolkit for the Web, focused primarily on data visualization use cases. luma.gl aims to support GPU programmers who need to work directly with shaders and want a low-abstraction API that remains conceptually close to the WebGPU and WebGL APIs. Unlike other common WebGL frameworks, developers can choose to use only the parts of luma.gl that support their use case and leave the rest behind. While generic enough to be used for general 3D rendering, luma.gl's mandate...
    Downloads: 0 This Week
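
    A shader-level "hello triangle" in luma.gl's classic v8 API might look roughly like the sketch below. The imports (AnimationLoop, Model from @luma.gl/engine) and constructor options follow the v8-era documentation from memory and should be treated as assumptions; newer luma.gl releases expose a different Device-based API.

      // Rough v8-era luma.gl sketch; names and options are assumptions, not verified.
      import { AnimationLoop, Model } from "@luma.gl/engine";

      const vs = `#version 300 es
        void main() {
          // Generate a triangle from the vertex index alone; no vertex buffers needed.
          const vec2 positions[3] = vec2[3](vec2(-1., -1.), vec2(3., -1.), vec2(-1., 3.));
          gl_Position = vec4(positions[gl_VertexID], 0., 1.);
        }`;
      const fs = `#version 300 es
        precision highp float;
        out vec4 fragColor;
        void main() { fragColor = vec4(1., 0., 0., 1.); }`;

      const loop = new AnimationLoop({
        onInitialize({ gl }) {
          // Model pairs the shaders with draw parameters.
          return { model: new Model(gl, { vs, fs, vertexCount: 3 }) };
        },
        onRender({ model }) {
          model.draw();
        },
      });
      loop.start();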
  • 6. Web LLM

    Bringing large language models and chat to web browsers

    WebLLM is a modular, customizable JavaScript package that brings language-model chat directly into web browsers with hardware acceleration. Everything runs inside the browser with no server support and is accelerated with WebGPU. This opens up many opportunities to build AI assistants for everyone while preserving privacy and enjoying GPU acceleration. WebLLM offers a minimalist and modular interface to access the chatbot in the browser. The WebLLM package itself does not come...
    Downloads: 0 This Week
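
    A minimal usage sketch is shown below, following the OpenAI-style interface documented for recent @mlc-ai/web-llm releases. The model id string is an assumption (the list of prebuilt models changes over time), and earlier versions of the package used a different ChatModule API.

      // Sketch of the OpenAI-style API in recent @mlc-ai/web-llm releases.
      import { CreateMLCEngine } from "@mlc-ai/web-llm";

      async function demo(): Promise<void> {
        // The model id is an assumption; consult the WebLLM docs for current ids.
        const engine = await CreateMLCEngine("Llama-3-8B-Instruct-q4f32_1-MLC", {
          initProgressCallback: (report) => console.log(report.text), // weight download progress
        });
        const reply = await engine.chat.completions.create({
          messages: [{ role: "user", content: "Explain WebGPU in one sentence." }],
        });
        console.log(reply.choices[0].message.content);
      }

      demo();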
  • 7. Rio

    A hardware-accelerated GPU terminal emulator powered by WebGPU.

    Rio is a terminal application built with Rust, WebGPU, and the Tokio runtime. It aims to deliver the best frames-per-second experience when you want it, but it can also be configured to use minimal GPU resources. It also relies on Rust's memory-safety guarantees. The terminal renderer is based on a Redux-style state machine: lines that have not been updated are not redrawn, keeping the rendering process minimal most of the time. Rio is also designed...
    Downloads: 0 This Week