Showing 47 open source projects for "nvidia"

  • 1
    NVIDIA GPU Operator

    NVIDIA GPU Operator creates/configures/manages GPUs atop Kubernetes

    ...These components include the NVIDIA drivers (to enable CUDA), Kubernetes device plugin for GPUs, the NVIDIA Container Runtime, automatic node labeling, DCGM-based monitoring, and others.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 2
    NVIDIA device plugin for Kubernetes

    The NVIDIA device plugin for Kubernetes is a DaemonSet that allows you to automatically expose the number of GPUs on each node of your cluster, keep track of the health of your GPUs, and run GPU-enabled containers in your Kubernetes cluster (a short usage sketch follows this entry).
    Downloads: 2 This Week
    Last Update:
    See Project
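
    For orientation only: once the plugin is running, workloads request GPUs through the nvidia.com/gpu extended resource. The sketch below uses the official Kubernetes Python client to define such a pod; the client library, pod name, and image tag are illustrative assumptions, not taken from this listing.

        # Hedged sketch: a pod that requests one GPU advertised by the device plugin.
        from kubernetes import client, config

        config.load_kube_config()  # assumes a local kubeconfig pointing at the cluster

        pod = client.V1Pod(
            metadata=client.V1ObjectMeta(name="gpu-smoke-test"),     # placeholder name
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="cuda",
                    image="nvidia/cuda:12.4.1-base-ubuntu22.04",     # placeholder image
                    command=["nvidia-smi"],
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"}),             # one GPU from the plugin
                )],
            ),
        )
        client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)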
  • 3
    HyDE Linux

    Aesthetic, dynamic, and minimal dotfiles for Arch Hyprland

    ...While installing HyDE alongside another DE/WM should work, due to it being a heavily customized setup, it will conflict with your GTK/Qt theming, Shell, SDDM, GRUB, etc., and is at your own risk. The install script will auto-detect an NVIDIA card and install nvidia-dkms drivers for your kernel.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 4
    TensorRT Node for ComfyUI

    Enables the best performance on NVIDIA RTX Graphics Cards

    ...This is particularly attractive for power users who run many generations or who host ComfyUI on dedicated hardware and want to squeeze out every bit of GPU performance. In short, it’s about taking ComfyUI from “it runs” to “it runs fast” on NVIDIA GPUs.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 5
    TensorRT

    C++ library for high performance inference on NVIDIA GPUs

    ...TensorRT is built on CUDA®, NVIDIA’s parallel programming model, and enables you to optimize inference leveraging libraries, development tools, and technologies in CUDA-X™ for artificial intelligence, autonomous machines, high-performance computing, and graphics. With new NVIDIA Ampere Architecture GPUs, TensorRT also leverages sparse tensor cores providing an additional performance boost.
    Downloads: 10 This Week
    Last Update:
    See Project
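
    As a rough sketch of the optimization workflow (assuming a recent TensorRT Python API; the ONNX file path and the FP16 choice are placeholders, not from this listing), an engine build looks roughly like this:

        # Hedged sketch: parse an ONNX model and build a serialized TensorRT engine.
        import tensorrt as trt

        logger = trt.Logger(trt.Logger.WARNING)
        builder = trt.Builder(logger)
        network = builder.create_network()
        parser = trt.OnnxParser(network, logger)

        with open("model.onnx", "rb") as f:              # placeholder model path
            if not parser.parse(f.read()):
                raise RuntimeError(parser.get_error(0))

        config = builder.create_builder_config()
        config.set_flag(trt.BuilderFlag.FP16)            # reduced precision for speed
        engine = builder.build_serialized_network(network, config)

        with open("model.engine", "wb") as f:
            f.write(engine)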
  • 6
    nviwatch

    A blazingly fast Rust-based TUI for managing and monitoring NVIDIA GPUs

    NviWatch is an interactive terminal user interface (TUI) application for monitoring NVIDIA GPU devices and processes. Built with Rust, it provides real-time insights into GPU performance metrics, including temperature, utilization, memory usage, and power consumption.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 7
    ONNX Runtime

    ONNX Runtime: cross-platform, high performance ML inferencing

    ...ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Support for a variety of frameworks, operating systems and hardware platforms. Built-in optimizations that deliver up to 17X faster inferencing and up to 1.4X faster training.
    Downloads: 27 This Week
    Last Update:
    See Project
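
    A minimal inference sketch (the model path, input shape, and provider order below are illustrative assumptions, not taken from this listing):

        # Hedged sketch: one ONNX Runtime inference, preferring the CUDA execution
        # provider when available and falling back to CPU.
        import numpy as np
        import onnxruntime as ort

        session = ort.InferenceSession(
            "model.onnx",                                    # placeholder model file
            providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
        )
        input_name = session.get_inputs()[0].name
        x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
        outputs = session.run(None, {input_name: x})
        print(outputs[0].shape)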
  • 8
    Isaac ROS Visual SLAM

    Visual SLAM/odometry package based on NVIDIA-accelerated cuVSLAM

    Discover a faster, easier way to build advanced AI robotics applications with the NVIDIA Isaac™ ROS collection of accelerated computing packages and AI models, bringing NVIDIA acceleration to ROS developers everywhere. Isaac ROS Visual SLAM provides a high-performance, best-in-class ROS 2 package for VSLAM (visual simultaneous localization and mapping). This package uses one or more stereo cameras and optionally an IMU to estimate odometry as an input to navigation.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    waifu2x ncnn Vulkan

    waifu2x converter, ncnn version; runs fast on the GPU with Vulkan

    ncnn implementation of the waifu2x converter. Runs fast on Intel, AMD, NVIDIA, and Apple Silicon GPUs with the Vulkan API. waifu2x-ncnn-vulkan uses the ncnn project as its universal neural network inference framework.
    Downloads: 13 This Week
    Last Update:
    See Project
  • 10
    FlashMLA

    FlashMLA: Efficient Multi-head Latent Attention Kernels

    FlashMLA is a high-performance decoding kernel library designed especially for Multi-Head Latent Attention (MLA) workloads, targeting NVIDIA Hopper GPU architectures. It provides optimized kernels for MLA decoding, including support for variable-length sequences, helping reduce latency and increase throughput in model inference systems using that attention style. The library supports both BF16 and FP16 data types, and includes a paged KV cache implementation with a block size of 64 to efficiently manage memory during decoding. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    WhiteSur GTK Theme

    A macOS-like theme for all GTK-based desktops

    WhiteSur-gtk-theme brings a macOS Big Sur–inspired look to Linux desktops by providing a polished GTK theme with light and dark variants, rounded shapes, and refined translucency. It includes assets and installer scripts to apply the theme across GTK applications and desktop shells that support GTK theming, aiming for a cohesive, high-contrast interface. The project pays attention to details like window controls, titlebars, selection states, and widget hover effects so apps feel consistent...
    Downloads: 12 This Week
    Last Update:
    See Project
  • 12
    CuPy

    A NumPy-compatible array library accelerated by CUDA

    CuPy is an open-source implementation of a NumPy-compatible multi-dimensional array accelerated with NVIDIA CUDA. It consists of cupy.ndarray, a core multi-dimensional array class, and many functions on it. CuPy offers GPU-accelerated computing with Python, using CUDA-related libraries to fully utilize the GPU architecture. According to benchmarks, it can even speed up some operations by more than 100X. CuPy is highly compatible with NumPy, serving as a drop-in replacement in most cases (a short usage sketch follows this entry). ...
    Downloads: 1 This Week
    Last Update:
    See Project
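
    A minimal sketch of that drop-in usage (assumes a CUDA-capable GPU and the cupy package are installed):

        # Hedged sketch: the same array math as NumPy, executed on the GPU.
        import numpy as np
        import cupy as cp

        x_gpu = cp.arange(1_000_000, dtype=cp.float32)   # allocated in GPU memory
        y_gpu = cp.sqrt(x_gpu) * 2.0                     # computed on the GPU
        y_cpu = cp.asnumpy(y_gpu)                        # copied back to a NumPy array

        print(np.allclose(y_cpu[:5], np.sqrt(np.arange(5, dtype=np.float32)) * 2.0))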
  • 13
    CUDA API Wrappers

    Thin, unified, C++-flavored wrappers for the CUDA APIs

    CUDA API Wrappers is a C++ library providing high-level, modern wrappers for NVIDIA’s CUDA runtime and driver APIs, enhancing usability and efficiency. It is intended for those who would otherwise use these APIs directly, to make working with them more intuitive and consistent, making use of modern C++ language capabilities, programming idioms, and best practices. In a nutshell: making work with the CUDA APIs more fun.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    oneDNN

    oneAPI Deep Neural Network Library (oneDNN)

    ...The library is optimized for Intel(R) Architecture Processors, Intel Processor Graphics and Xe Architecture graphics. oneDNN has experimental support for the following architectures: Arm* 64-bit Architecture (AArch64), NVIDIA* GPU, OpenPOWER* Power ISA (PPC64), IBMz* (s390x), and RISC-V. oneDNN is intended for deep learning applications and framework developers interested in improving application performance on Intel CPUs and GPUs. Deep learning practitioners should use one of the applications enabled with oneDNN.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 15
    Transformers4Rec

    Transformers4Rec is a flexible and efficient library

    Transformers4Rec is an advanced recommendation system library that leverages Transformer models for sequential and session-based recommendations. The library works as a bridge between natural language processing (NLP) and recommender systems (RecSys) by integrating with one of the most popular NLP frameworks, Hugging Face Transformers (HF). Transformers4Rec makes state-of-the-art transformer architectures available for RecSys researchers and industry practitioners. Traditional recommendation...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    DALI

    A GPU-accelerated library containing highly optimized building blocks

    The NVIDIA Data Loading Library (DALI) is a library for data loading and pre-processing to accelerate deep learning applications. It provides a collection of highly optimized building blocks for loading and processing image, video and audio data. It can be used as a portable drop-in replacement for built-in data loaders and data iterators in popular deep learning frameworks.
    Downloads: 0 This Week
    Last Update:
    See Project
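
    As a rough sketch of that drop-in data-loading idea (the directory, batch size, and image size below are placeholders; the decorator-based API assumes a recent DALI release):

        # Hedged sketch: a small DALI pipeline that reads, decodes, and resizes images.
        from nvidia.dali import pipeline_def, fn

        @pipeline_def
        def image_pipeline(file_root):
            jpegs, labels = fn.readers.file(file_root=file_root, random_shuffle=True)
            images = fn.decoders.image(jpegs, device="mixed")   # hybrid CPU/GPU decode
            images = fn.resize(images, resize_x=224, resize_y=224)
            return images, labels

        pipe = image_pipeline("images/",                        # placeholder directory
                              batch_size=32, num_threads=4, device_id=0)
        pipe.build()
        images, labels = pipe.run()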
  • 17
    libfabric

    AWS Libfabric

    ...Its custom-built operating system (OS) bypass hardware interface enhances the performance of inter-instance communications, which is critical to scaling these applications. With EFA, High Performance Computing (HPC) applications using the Message Passing Interface (MPI) and Machine Learning (ML) applications using NVIDIA Collective Communications Library (NCCL) can scale to thousands of CPUs or GPUs. As a result, you get the application performance of on-premises HPC clusters with the on-demand elasticity and flexibility of the AWS cloud.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    Tiny CUDA Neural Networks

    Lightning fast C++/CUDA neural network framework

    This is a small, self-contained framework for training and querying neural networks. Most notably, it contains a lightning-fast "fully fused" multi-layer perceptron (technical paper), a versatile multiresolution hash encoding (technical paper), as well as support for various other input encodings, losses, and optimizers. We provide a sample application where an image function (x,y) -> (R,G,B) is learned. The fully fused MLP component of this framework requires a very large amount of shared...
    Downloads: 1 This Week
    Last Update:
    See Project
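
    The project also provides optional PyTorch bindings (tinycudann); the sketch below uses them to set up the kind of (x, y) -> (R, G, B) network mentioned above, with illustrative hyperparameters that are assumptions rather than values from this listing:

        # Hedged sketch: a fully fused MLP behind a multiresolution hash encoding.
        import torch
        import tinycudann as tcnn

        model = tcnn.NetworkWithInputEncoding(
            n_input_dims=2,                       # (x, y) pixel coordinates
            n_output_dims=3,                      # (R, G, B)
            encoding_config={"otype": "HashGrid", "n_levels": 16,
                             "n_features_per_level": 2, "log2_hashmap_size": 19,
                             "base_resolution": 16, "per_level_scale": 2.0},
            network_config={"otype": "FullyFusedMLP", "activation": "ReLU",
                            "output_activation": "None",
                            "n_neurons": 64, "n_hidden_layers": 2},
        )
        xy = torch.rand(1024, 2, device="cuda")   # a batch of sample coordinates
        rgb = model(xy)                           # runs in fused CUDA kernels
        print(rgb.shape)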
  • 19
    AWS Deep Learning Containers

    A set of Docker images for training and serving models in TensorFlow

    AWS Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet. Deep Learning Containers provide optimized environments with TensorFlow and MXNet, Nvidia CUDA (for GPU instances), and Intel MKL (for CPU instances) libraries and are available in the Amazon Elastic Container Registry (Amazon ECR). The AWS DLCs are used in Amazon SageMaker as the default vehicles for your SageMaker jobs such as training, inference, transforms etc. They've been tested for machine learning workloads on Amazon EC2, Amazon ECS and Amazon EKS services as well. ...
    Downloads: 1 This Week
    Last Update:
    See Project
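
    One hedged way to reference these ECR images programmatically is through the SageMaker Python SDK; the framework version, Python version, region, and instance type below are illustrative assumptions, not taken from this listing:

        # Hedged sketch: look up a Deep Learning Container image URI for PyTorch training.
        import sagemaker

        image_uri = sagemaker.image_uris.retrieve(
            framework="pytorch",
            region="us-east-1",                 # placeholder region
            version="2.1",                      # placeholder framework version
            py_version="py310",
            image_scope="training",
            instance_type="ml.p4d.24xlarge",    # GPU instance -> CUDA-enabled image
        )
        print(image_uri)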
  • 20
    DeepPavlov

    A library for deep learning end-to-end dialog systems and chatbots

    DeepPavlov makes it easy for beginners and experts to create dialogue systems. The best place to start is with user-friendly tutorials. They provide quick and convenient introduction on how to use DeepPavlov with complete, end-to-end examples. No installation needed. Guides explain the concepts and components of DeepPavlov. Follow step-by-step instructions to install, configure and extend DeepPavlov framework for your use case. DeepPavlov is an open-source framework for chatbots and virtual...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 21
    cuDF

    GPU DataFrame Library

    ...For additional examples, browse our complete API documentation, or check out our more detailed notebooks. cuDF can be installed with conda (miniconda, or the full Anaconda distribution) from the rapidsai channel. cuDF is supported only on Linux, with Python versions 3.7 and later; a short usage sketch follows this entry. The RAPIDS suite of open-source software libraries aims to enable the execution of end-to-end data science and analytics pipelines entirely on GPUs. It relies on NVIDIA® CUDA® primitives for low-level compute optimization, but exposes that GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.
    Downloads: 0 This Week
    Last Update:
    See Project
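
    A minimal sketch of that pandas-like interface (assumes cudf installed from the rapidsai conda channel on Linux, as described above):

        # Hedged sketch: a small GPU DataFrame with a groupby aggregation.
        import cudf

        df = cudf.DataFrame({
            "key":   ["a", "b", "a", "b", "a"],
            "value": [1.0, 2.0, 3.0, 4.0, 5.0],
        })
        means = df.groupby("key")["value"].mean()   # aggregation runs on the GPU
        print(means)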
  • 22
    Totally Built Linux Distro

    Build Linux from termcap to GTK 4.7, 100% non-stop

    Build an entire light Linux distro non-stop, with no build failures, using just a few commands (not including un-tarring the initial download). Current build time: 3 to 4.7 hours* on a Core i7 using -j8. The build is fully automatic from termcap to GTK-4, including making a new bootable USB, and the result is new enough to run Firefox 20-130 and Mathematica 4-14. Minimal command lines and configuration are required. Cross-compiles i386 to x86_64 (note: not a Canadian cross). ~500 packages are built, including a configured database...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    NVIDIA Container Toolkit

    Build and run Docker containers leveraging NVIDIA GPUs

    The NVIDIA Container Toolkit allows users to build and run GPU accelerated Docker containers. The toolkit includes a container runtime library and utilities to automatically configure containers to leverage NVIDIA GPUs. Make sure you have installed the NVIDIA driver and Docker engine for your Linux distribution Note that you do not need to install the CUDA Toolkit on the host system, but the NVIDIA driver needs to be installed.
    Downloads: 4 This Week
    Last Update:
    See Project
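
    A hedged sketch of requesting GPUs at container run time, here through the Docker SDK for Python rather than the docker CLI (the SDK usage and the CUDA image tag are illustrative assumptions):

        # Hedged sketch: run nvidia-smi in a CUDA base image with all host GPUs
        # exposed via the NVIDIA Container Toolkit.
        import docker

        client = docker.from_env()
        output = client.containers.run(
            "nvidia/cuda:12.4.1-base-ubuntu22.04",       # placeholder image tag
            "nvidia-smi",
            device_requests=[docker.types.DeviceRequest(
                count=-1, capabilities=[["gpu"]])],      # -1 = all GPUs
            remove=True,
        )
        print(output.decode())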
  • 24
    Thrust

    The C++ parallel algorithms library

    ...It builds on top of established parallel programming frameworks (such as CUDA, TBB, and OpenMP). It also provides a number of general-purpose facilities similar to those found in the C++ Standard Library. The NVIDIA C++ Standard Library is an open-source project; it is available on GitHub and included in the NVIDIA HPC SDK and CUDA Toolkit. If you have one of those SDKs installed, no additional installation or compiler flags are needed to use libcu++. Thrust is a header-only library; there is no need to build or install the project unless you want to run the Thrust unit tests.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 25
    Knet

    Koç University deep learning framework

    Knet.jl is a deep learning package implemented in Julia, so you should be able to run it on any machine that can run Julia. It has been extensively tested on Linux machines with NVIDIA GPUs and CUDA libraries, and it has been reported to work on OSX and Windows. If you would like to try it on your own computer, please follow the instructions on Installation. If you would like to try working with a GPU and do not have access to one, take a look at Using Amazon AWS or Using Microsoft Azure. If you find a bug, please open a GitHub issue. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  Page 1 of 2