Showing 18 open source projects for "gpu faster"

  • 1
    Xenia Canary

    Xbox 360 Emulator Research Project

    Xenia Canary is an experimental fork of the Xenia Xbox 360 emulator that moves faster than the mainline project to trial bleeding-edge improvements. It focuses on game compatibility and performance by iterating quickly on GPU and CPU emulation paths, shader translation, and timing correctness. Canary builds are where risky optimizations, new backends, and rewrites land first so they can be tested by a wider community before stabilizing.
    Downloads: 118 This Week
  • 2
    Citron Neo

    Research software designed to orchestrate virtual environments

    Citron Neo is an advanced emulator project focused on replicating complex system environments with high performance and flexibility. It is designed to emulate modern console behavior while integrating improvements in CPU emulation, GPU rendering, and memory management. The project incorporates optimizations such as dynamic recompilation and Vulkan-based rendering to enhance performance across supported platforms. It also includes continuous updates that improve compatibility with games and system firmware, reflecting an active development cycle. Citron aims to provide a refined user experience through UI enhancements, faster loading times, and better resource handling. ...
    Downloads: 74 This Week
  • 3
    cuML

    RAPIDS Machine Learning Library

    cuML is a suite of libraries that implement machine learning algorithms and mathematical primitive functions sharing compatible APIs with other RAPIDS projects. cuML enables data scientists, researchers, and software engineers to run traditional tabular ML tasks on GPUs without going into the details of CUDA programming. In most cases, cuML's Python API matches the API of scikit-learn; see the sketch below this entry. For large datasets, these GPU-based implementations can complete 10-50x faster than their CPU equivalents. For details on performance, see the cuML Benchmarks Notebook.
    Downloads: 0 This Week
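
    Because cuML's estimators track scikit-learn's interface, a scikit-learn snippet often ports by changing only the import. A minimal sketch, assuming a RAPIDS install and an NVIDIA GPU (the data below is synthetic):

        import numpy as np
        from cuml.linear_model import LinearRegression

        X = np.random.rand(10_000, 8).astype(np.float32)
        y = X @ np.arange(1, 9, dtype=np.float32)

        model = LinearRegression()    # same constructor shape as sklearn's
        model.fit(X, y)               # the fit runs on the GPU
        preds = model.predict(X[:5])  # first five predictions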
  • 4
    CTranslate2

    Fast inference engine for Transformer models

    ...The project implements a custom runtime that applies performance optimizations such as weight quantization, layer fusion, padding removal, batch reordering, in-place operations, and caching to accelerate Transformer models and reduce their memory usage on CPU and GPU. On supported models and tasks, execution is significantly faster and requires fewer resources than general-purpose deep learning frameworks. Model serialization and computation support weights with reduced precision: 16-bit floating point (FP16), 16-bit integers (INT16), and 8-bit integers (INT8); a minimal loading sketch follows this entry. ...
    Downloads: 1 This Week
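
    A minimal loading sketch for the Python API, assuming a model already converted with CTranslate2's converter tools; the model directory and the pre-tokenized input below are placeholders:

        import ctranslate2

        translator = ctranslate2.Translator(
            "ende_ctranslate2",    # placeholder path to a converted model
            device="cuda",         # or "cpu"
            compute_type="int8",   # load weights at reduced precision
        )
        results = translator.translate_batch([["▁Hello", "▁world"]])
        print(results[0].hypotheses[0])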
  • 5
    MNN

    MNN is a blazing fast, lightweight deep learning framework

    MNN is a highly efficient and lightweight deep learning framework. It supports inference and training of deep learning models and has industry-leading performance for on-device inference and training. At present, MNN is integrated into more than 20 apps of Alibaba Inc., such as Taobao, Tmall, Youku, Dingtalk, and Xianyu, covering more than 70 usage scenarios such as live broadcast, short video capture, search recommendation, product search by image, interactive marketing, equity...
    Downloads: 5 This Week
  • 6
    ncnn

    High-performance neural network inference framework for mobile

    ncnn is a high-performance neural network inference computing framework designed specifically for mobile platforms. It brings artificial intelligence right to your fingertips with no third-party dependencies, and it runs faster than all other known open source frameworks on mobile phone CPUs. ncnn lets developers easily deploy deep learning models to mobile platforms and create intelligent apps; a minimal Python-binding sketch follows this entry. It is cross-platform and supports most commonly used CNN networks, including...
    Downloads: 36 This Week
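
    A minimal sketch of ncnn's Python binding (pip install ncnn); the model file names and the blob names "in0"/"out0" are placeholders that depend on the exported model:

        import numpy as np
        import ncnn

        net = ncnn.Net()
        net.load_param("model.param")   # placeholder model files
        net.load_model("model.bin")

        ex = net.create_extractor()
        img = np.zeros((3, 224, 224), dtype=np.float32)  # dummy CHW input
        ex.input("in0", ncnn.Mat(img))
        ret, out = ex.extract("out0")   # run inference, fetch the output blob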
  • 7
    Isaac ROS Visual SLAM

    Visual SLAM/odometry package based on NVIDIA-accelerated cuVSLAM

    Discover a faster, easier way to build advanced AI robotics applications with the NVIDIA Isaac™ ROS collection of accelerated computing packages and AI models, bringing NVIDIA acceleration to ROS developers everywhere. Isaac ROS Visual SLAM provides a high-performance, best-in-class ROS 2 package for VSLAM (visual simultaneous localization and mapping).
    Downloads: 2 This Week
  • 8
    TensorRT

    C++ library for high performance inference on NVIDIA GPUs

    NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications. TensorRT-based applications perform up to 40X faster than CPU-only platforms during inference. With TensorRT, you can optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyperscale data centers,...
    Downloads: 11 This Week
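
    A minimal sketch of the TensorRT 8.x Python builder flow: parse an ONNX model (the "model.onnx" path is a placeholder) and build a serialized engine with FP16 enabled:

        import tensorrt as trt

        logger = trt.Logger(trt.Logger.WARNING)
        builder = trt.Builder(logger)
        network = builder.create_network(
            1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
        parser = trt.OnnxParser(network, logger)
        with open("model.onnx", "rb") as f:       # placeholder model
            parser.parse(f.read())

        config = builder.create_builder_config()
        config.set_flag(trt.BuilderFlag.FP16)     # allow reduced precision
        engine = builder.build_serialized_network(network, config)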
  • 9
    Tiny CUDA Neural Networks

    Lightning fast C++/CUDA neural network framework

    This is a small, self-contained framework for training and querying neural networks. Most notably, it contains a lightning-fast "fully fused" multi-layer perceptron (technical paper), a versatile multiresolution hash encoding (technical paper), as well as support for various other input encodings, losses, and optimizers. We provide a sample application where an image function (x,y) -> (R,G,B) is learned. The fully fused MLP component of this framework requires a very large amount of shared...
    Downloads: 2 This Week
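
    A minimal sketch of the project's PyTorch bindings for the (x, y) -> (R, G, B) image task described above: a multiresolution hash encoding feeding the fully fused MLP. The config values are illustrative assumptions, not the project's defaults:

        import torch
        import tinycudann as tcnn

        model = tcnn.NetworkWithInputEncoding(
            n_input_dims=2, n_output_dims=3,
            encoding_config={"otype": "HashGrid", "n_levels": 16,
                             "n_features_per_level": 2, "log2_hashmap_size": 19,
                             "base_resolution": 16, "per_level_scale": 2.0},
            network_config={"otype": "FullyFusedMLP", "activation": "ReLU",
                            "output_activation": "None",
                            "n_neurons": 64, "n_hidden_layers": 2})
        xy = torch.rand(1024, 2, device="cuda")   # query coordinates
        rgb = model(xy)                           # (1024, 3) RGB predictions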
  • 10
    TSNE-CUDA

    GPU Accelerated t-SNE for CUDA with Python bindings

    This repo is an optimized CUDA version of the FIt-SNE algorithm with associated Python modules. We find that our implementation of t-SNE can be up to 1200x faster than scikit-learn, or up to 50x faster than Multicore-TSNE, when used with the right GPU. You can install binaries with Anaconda for CUDA versions 10.1 and 10.2 using conda install tsnecuda -c conda-forge. tsnecuda supports CUDA versions 9.0 and later through source installation; check the wiki for up-to-date installation instructions. A minimal usage sketch follows this entry. ...
    Downloads: 0 This Week
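
    A minimal usage sketch; tsnecuda mirrors scikit-learn's TSNE interface (synthetic data here, and a supported NVIDIA GPU is assumed):

        import numpy as np
        from tsnecuda import TSNE

        X = np.random.rand(5000, 64).astype(np.float32)  # synthetic data
        embedding = TSNE(n_components=2, perplexity=30).fit_transform(X)
        print(embedding.shape)   # (5000, 2)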
  • 11
    MACE

    Deep learning inference framework optimized for mobile platforms

    ...The runtime is optimized with NEON, OpenCL, and Hexagon, and the Winograd algorithm is used to speed up convolution operations. Initialization is also optimized to be faster. Chip-dependent power options such as big.LITTLE scheduling and Adreno GPU hints are included as advanced APIs. Because UI responsiveness must sometimes be guaranteed while a model runs, mechanisms such as automatically breaking an OpenCL kernel into small units are introduced to allow better preemption for the UI rendering task. Graph-level memory allocation optimization and buffer reuse are supported. ...
    Downloads: 0 This Week
  • 12
    YOLO ROS

    YOLO ROS: Real-Time Object Detection for ROS

    ...Darknet on the CPU is fast (approximately 1.5 seconds on an Intel Core i7-6700HQ CPU @ 2.60GHz × 8), but it is roughly 500 times faster on a GPU. You will need an Nvidia GPU with CUDA installed; the CMakeLists.txt file automatically detects whether CUDA is present. CUDA is a parallel computing platform and application programming interface (API) model created by Nvidia.
    Downloads: 0 This Week
  • 13
    SoAx

    Structure of Arrays of multiple types

    Structures of arrays (SoA) are generally faster than arrays of structures (AoS), while AoS is more convenient to work with. This project (SoAx) combines the advantages of both. By means of C++11 template metaprogramming, SoAx achieves maximal performance (efficient use of the vector units and caches of modern CPUs) while providing a very convenient user interface (including object-oriented element handling) and flexibility; see the layout sketch below this entry. It has been designed to handle list-like sets of particles (similar to struct {int id;...
    Downloads: 0 This Week
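
    SoAx itself is a C++ template library; the sketch below only illustrates the AoS-versus-SoA layout tradeoff it addresses, in NumPy rather than SoAx's own API:

        import numpy as np

        n = 1_000_000
        # AoS: fields interleaved per record; convenient, but reading one
        # field strides across memory.
        aos = np.zeros(n, dtype=[("id", np.int32), ("x", np.float64),
                                 ("y", np.float64)])
        # SoA: one contiguous array per field; a kernel touching only "x"
        # sees unit-stride runs, friendly to caches and vector units.
        soa = {"id": np.zeros(n, np.int32),
               "x": np.zeros(n, np.float64),
               "y": np.zeros(n, np.float64)}

        aos["x"] += 1.0   # strided access through interleaved records
        soa["x"] += 1.0   # contiguous access over a single field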
  • 14
    SOAP3-DP

    Fast, Accurate and Sensitive GPU-based Short Read Aligner

    Latest code on GitHub: https://github.com/aquaskyline/SOAP3-dp. By leveraging the computational power of both CPU and GPU with optimized algorithms, SOAP3-dp delivers high speed and sensitivity simultaneously. Compared with widely adopted aligners including BWA, Bowtie2, SeqAlto, CUSHAW2, GEM, and the GPU-based aligners BarraCUDA and CUSHAW, SOAP3-dp was found to be two to tens of times faster, while maintaining the highest sensitivity and lowest false discovery rate (FDR) on Illumina reads of different lengths. ...
    Downloads: 0 This Week
  • 15
    MICA-aligner

    Next-generation sequencing short reads aligner based on Intel® MIC

    Latest code on GitHub: https://github.com/aquaskyline/MICA-aligner. To better utilize MIC-enabled computers for NGS data analysis, we developed a new short-read aligner, MICA, that is optimized around the MIC's limitations and the extra parallelism inside each MIC core. Experiments on aligning 150bp paired-end reads show that MICA using one MIC board is ~4.85 times faster than the CPU-(multi-core)-based BWA-MEM and about the same speed as the GPU-based SOAP3-dp. Furthermore, MICA's simplicity allows very efficient scale-up when multiple MIC boards are used in a node (3 cards give a 14-fold speedup over 6-core BWA-MEM).
    Downloads: 0 This Week
  • 16
    Simbuca
    SIMBUCA (formerly called Simonion) is a simulation package that simulates the motion of charged particles under the influence of electric and/or magnetic fields. What makes Simbuca unique is that you can choose to calculate the Coulomb interaction between ions on a graphics card, which is much faster than on a conventional CPU (reducing the simulation time from years to days). Simbuca has therefore been applied in various projects that required understanding the...
    Downloads: 1 This Week
  • 17
    Simulates the optical reflectance from an infinite turbid medium under an ideal oblique-incidence optical source. Two versions are implemented, CPU and GPU; both generate statistically identical results, but the GPU version runs much faster.
    Downloads: 0 This Week
  • 18
    Particular filter CUDA

    Improvements of positioning algorithms using CUDA

    Our project consists of porting positioning algorithms to the GPU. We will adapt programs that already run on the CPU to make them compatible with Nvidia's CUDA technology. The advantage of this technology is that it allows massive multithreading, which makes calculations faster. The algorithms will be implemented in C++.
    Downloads: 0 This Week