Showing 984 open source projects for "gpu-z"

  • 1
    llama2-webui

    Run any Llama 2 locally with gradio UI on GPU or CPU from anywhere

    Running Llama 2 with gradio web UI on GPU or CPU from anywhere (Linux/Windows/Mac).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 2
    luma.gl

    High-performance Toolkit for WebGL-based data visualization

    luma.gl is a GPU toolkit for the Web, focused primarily on data visualization use cases. luma.gl aims to provide support for GPU programmers who need to work directly with shaders and want a low-abstraction API that remains conceptually close to the WebGPU and WebGL APIs. Unlike other common WebGL APIs, the developer can choose to use the parts of luma.gl that support their use case and leave the others behind. While generic enough to be used for general 3D rendering, luma.gl's mandate...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    CV-CUDA

    CV-CUDA™ is an open-source, GPU-accelerated library

    CV-CUDA is an open-source project that enables building efficient cloud-scale Artificial Intelligence (AI) imaging and computer vision (CV) applications. It uses graphics processing unit (GPU) acceleration to help developers build highly efficient pre- and post-processing pipelines. CV-CUDA originated as a collaborative effort between NVIDIA and ByteDance.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    GameMode

    Optimise Linux system performance on demand

    GameMode is a daemon/lib combo for Linux that allows games to request a set of optimizations be temporarily applied to the host OS and/or a game process. GameMode was designed primarily as a stop-gap solution to problems with the Intel and AMD CPU power save or on-demand governors but is now host to a range of optimization features and configurations.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    KernelAbstractions.jl

    Heterogeneous programming in Julia

    KernelAbstractions (KA) is a package that enables you to write GPU-like kernels targeting different execution backends. KA is intended to be a minimal and performant library that explores ways to write heterogeneous code.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
    ParallelStencil.jl

    Package for writing high-level code for parallel stencil computations

    ParallelStencil empowers domain scientists to write architecture-agnostic high-level code for parallel high-performance stencil computations on GPUs and CPUs. Performance similar to CUDA C / HIP can be achieved, which is typically a large improvement over the performance reached when using only CUDA.jl or AMDGPU.jl GPU Array programming. For example, a 2-D shallow ice solver presented at JuliaCon 2020 [1] achieved nearly 20 times better performance than a corresponding GPU Array programming...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
    NVIDIA Merlin

    Library providing end-to-end GPU-accelerated recommender systems

    ... on the NVIDIA developer website. Transform data (ETL) for preprocessing and engineering features. Accelerate your existing training pipelines in TensorFlow, PyTorch, or FastAI by leveraging optimized, custom-built data loaders. Scale large deep learning recommender models by distributing large embedding tables that exceed available GPU and CPU memory. Deploy data transformations and trained models to production with only a few lines of code. A brief, hedged preprocessing sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
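
    The entry above describes GPU-accelerated data transformation; a minimal, hedged sketch of that idea using NVTabular (Merlin's preprocessing library) is shown below. The column names and toy data are illustrative assumptions, not taken from the project, and a CUDA GPU with a RAPIDS/Merlin installation is assumed.

    # Hypothetical NVTabular preprocessing sketch; columns and values are placeholders.
    import pandas as pd
    import nvtabular as nvt
    from nvtabular import ops

    # Toy interaction data standing in for a real recommender dataset.
    df = pd.DataFrame({
        "user_id": [1, 2, 1, 3],
        "item_id": [10, 11, 12, 10],
        "price": [9.99, 3.50, 7.25, 9.99],
        "click": [1, 0, 1, 0],
    })

    # Column groups are composed with >>: categorical ids get encoded, continuous columns normalized.
    cats = ["user_id", "item_id"] >> ops.Categorify()
    conts = ["price"] >> ops.Normalize()
    workflow = nvt.Workflow(cats + conts + ["click"])

    dataset = nvt.Dataset(df)       # wraps the DataFrame (GPU-backed when cudf is available)
    workflow.fit(dataset)           # learn category mappings and normalization statistics
    out = workflow.transform(dataset).to_ddf().compute()
    print(out.head())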
  • 8
    mactop

    Apple Silicon Monitor Top written in pure Golang

    mactop is a terminal-based monitoring tool, in the style of "top", designed to display real-time metrics for Apple Silicon chips. It provides a simple and efficient way to monitor CPU and GPU usage, E-Cores and P-Cores, power consumption, and other system metrics directly from your terminal.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    FFCV

    Fast Forward Computer Vision (and other ML workloads!)

    ffcv is a drop-in data loading system that dramatically increases data throughput in model training. From gridding to benchmarking to fast research iteration, there are many reasons to want faster model training. Below we present premade codebases for training on ImageNet and CIFAR, including both (a) extensible codebases and (b) numerous premade training configurations. A brief, hedged usage sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
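
    A hedged sketch of the workflow the FFCV entry above describes: convert an indexed dataset to FFCV's .beton format once, then iterate over it with the fast Loader. The toy dataset, file name, and pipeline choices are illustrative assumptions, not taken from the project.

    # Hypothetical FFCV usage sketch.
    import numpy as np
    from ffcv.writer import DatasetWriter
    from ffcv.fields import RGBImageField, IntField
    from ffcv.fields.decoders import SimpleRGBImageDecoder, IntDecoder
    from ffcv.loader import Loader, OrderOption
    from ffcv.transforms import ToTensor

    class ToyDataset:
        # Minimal indexed dataset of random RGB images and integer labels.
        def __len__(self):
            return 100
        def __getitem__(self, idx):
            image = np.random.randint(0, 255, (32, 32, 3), dtype=np.uint8)
            return image, idx % 10

    write_path = "toy.beton"

    # One-off conversion: declare a field type for each element of a sample.
    writer = DatasetWriter(write_path, {"image": RGBImageField(max_resolution=64),
                                        "label": IntField()})
    writer.from_indexed_dataset(ToyDataset())

    # Training-time loader with per-field decode/transform pipelines.
    loader = Loader(write_path, batch_size=16, num_workers=4, order=OrderOption.RANDOM,
                    pipelines={"image": [SimpleRGBImageDecoder(), ToTensor()],
                               "label": [IntDecoder(), ToTensor()]})
    for images, labels in loader:
        pass  # training step goes here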
  • 10
    higgsfield

    Fault-tolerant, highly scalable GPU orchestration

    Higgsfield is an open-source, fault-tolerant, highly scalable GPU orchestration and machine learning framework designed for training models with billions to trillions of parameters, such as Large Language Models (LLMs).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    pomegranate

    Fast, flexible and easy to use probabilistic modelling in Python

    pomegranate is a library for probabilistic modeling defined by its modular implementation and treatment of all models as the probability distributions they are. The modular implementation allows one to easily drop normal distributions into a mixture model to create a Gaussian mixture model just as easily as dropping a gamma and a Poisson distribution into a mixture model to create a heterogeneous mixture. But that's not all! Because each model is treated as a probability distribution,... A brief, hedged usage sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
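
    A minimal, hedged sketch of the modularity the pomegranate entry above describes, assuming the current PyTorch-backed API (v1.0 and later); older releases exposed a different, scikit-learn-style interface. The data is synthetic.

    # Hypothetical pomegranate sketch: drop two Normal distributions into a mixture model.
    import torch
    from pomegranate.distributions import Normal
    from pomegranate.gmm import GeneralMixtureModel

    # Synthetic 2-D data drawn from two clusters.
    X = torch.cat([torch.randn(500, 2) - 2.0, torch.randn(500, 2) + 2.0])

    model = GeneralMixtureModel([Normal(), Normal()])  # a Gaussian mixture, built from parts
    model.fit(X)
    labels = model.predict(X)                          # most likely component per sample
    print(labels[:10])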
  • 12
    cuML

    RAPIDS Machine Learning Library

    cuML is a suite of libraries that implement machine learning algorithms and mathematical primitive functions that share compatible APIs with other RAPIDS projects. cuML enables data scientists, researchers, and software engineers to run traditional tabular ML tasks on GPUs without going into the details of CUDA programming. In most cases, cuML's Python API matches the API from scikit-learn. For large datasets, these GPU-based implementations can complete 10-50x faster than their CPU... A brief, hedged usage sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
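
    A minimal, hedged sketch of the scikit-learn-style API the cuML entry above mentions; it assumes a CUDA-capable GPU and a RAPIDS installation, and the data is synthetic.

    # Hypothetical cuML usage: KMeans with an interface mirroring sklearn.cluster.KMeans.
    import numpy as np
    from cuml.cluster import KMeans

    X = np.random.random((100_000, 16)).astype(np.float32)  # host data; cuML moves it to the GPU

    km = KMeans(n_clusters=8, random_state=0)
    km.fit(X)                     # clustering runs on the GPU
    labels = km.predict(X)
    print(labels[:10])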
  • 13
    101-0250-00

    ETH course - Solving PDEs in parallel on GPUs

    This course aims to cover state-of-the-art methods in modern parallel Graphics Processing Unit (GPU) computing, supercomputing, and code development with applications to natural sciences and engineering.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    LLVM.jl

    Julia wrapper for the LLVM C API

    The LLVM.jl package is a Julia wrapper for the LLVM C API, and can be used to work with the LLVM compiler framework from Julia. You can use the package to work with LLVM code generated by Julia, to interoperate with the Julia compiler, or to create your own compiler. It is heavily used by the different GPU compilers for the Julia programming language.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    FLoops.jl

    Fast sequential, threaded, and distributed for-loops for Julia

    Fast sequential, threaded, and distributed for-loops for Julia: fold for humans. FLoops.jl provides a macro @floop that can be used to generate fast, generic sequential and parallel iteration over complex collections. Furthermore, a loop written with @floop can be executed with any compatible executor. See FoldsThreads.jl for various thread-based executors that are optimized for different kinds of loops. FoldsCUDA.jl provides an executor for GPUs. FLoops.jl also provides a simple distributed...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    Images.jl

    An image library for Julia

    JuliaImages (source code) hosts the major Julia packages for image processing. Julia is well-suited to image processing because it is a modern and elegant high-level language that is a pleasure to use, while also allowing you to write "inner loops" that compile to efficient machine code (i.e., it is as fast as C). Julia supports multithreading and, through add-on packages, GPU processing. JuliaImages is a collection of packages specifically focused on image processing. It is not yet as complete...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    dfdx

    Deep learning in Rust, with shape checked tensors and neural networks

    Ergonomics- and safety-focused deep learning in Rust, with shape-checked tensors and neural networks.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    FEDML Open Source

    The unified and scalable ML library for large-scale training

    ... interconnected AI infrastructure layers: user-friendly MLOps, a well-managed scheduler, and high-performance ML libraries for running any AI job across GPU clouds. When a developer wants to run a pre-built job in Studio or Job Store, TensorOperaLaunch swiftly pairs the AI job with the most economical GPU resources, auto-provisions them, and runs the job, eliminating complex environment setup and management.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    CUDA.jl

    CUDA programming in Julia

    High-performance GPU programming in a high-level language. JuliaGPU is a GitHub organization created to unify the many packages for programming GPUs in Julia. With its high-level syntax and flexible compiler, Julia is well-positioned to productively program hardware accelerators like GPUs without sacrificing performance. The latest development version of CUDA.jl requires Julia 1.8 or higher. If you are using an older version of Julia, you need to use a previous version of CUDA.jl...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    FastChat

    Open platform for training, serving, and evaluating language models

    FastChat is an open platform for training, serving, and evaluating large language model-based chatbots. If you do not have enough memory, you can enable 8-bit compression by adding --load-8bit to the commands above. This can reduce memory usage by around half with slightly degraded model quality. It is compatible with the CPU, GPU, and Metal backend. Vicuna-13B with 8-bit compression can run on a single NVIDIA 3090/4080/T4/V100(16GB) GPU. In addition to that, you can add --cpu-offloading...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    PyTorch Implementation of SDE Solvers

    Differentiable SDE solvers with GPU support and efficient sensitivity

    This library provides stochastic differential equation (SDE) solvers with GPU support and efficient backpropagation. examples/demo.ipynb gives a short guide on how to solve SDEs, including subtle points such as fixing the randomness in the solver and the choice of noise types. examples/latent_sde.py learns a latent stochastic differential equation, as in Section 5 of [1]. The example fits an SDE to data, whilst regularizing it to be like an Ornstein-Uhlenbeck prior process. The model can... A brief, hedged usage sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
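
    A minimal, hedged sketch of solving a toy SDE with this library; the drift, diffusion, and time grid are illustrative choices, not taken from the project's examples.

    # Hypothetical torchsde usage: an Ito SDE with diagonal noise integrated by sdeint.
    import torch
    import torchsde

    class ToySDE(torch.nn.Module):
        noise_type = "diagonal"   # g returns one diffusion value per state dimension
        sde_type = "ito"

        def f(self, t, y):        # drift term
            return -y

        def g(self, t, y):        # diffusion term
            return 0.1 * torch.ones_like(y)

    batch_size, state_size = 4, 1
    y0 = torch.full((batch_size, state_size), 0.5)
    ts = torch.linspace(0.0, 1.0, 20)

    ys = torchsde.sdeint(ToySDE(), y0, ts)   # shape: (len(ts), batch_size, state_size)
    print(ys.shape)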
  • 22
    SSD in PyTorch 1.0

    High quality, fast, modular reference implementation of SSD in PyTorch

    This repository implements SSD (Single Shot MultiBox Detector). The implementation is heavily influenced by the projects ssd.pytorch, pytorch-ssd and maskrcnn-benchmark. This repository aims to be the code base for research based on SSD. Multi-GPU training and inference: we use DistributedDataParallel, so you can train or test with an arbitrary number of GPUs, and the training schema will change accordingly. Add your own modules without pain. We abstract backbone, Detector, BoxHead, BoxPredictor, etc. You can... A generic sketch of the underlying DistributedDataParallel mechanism follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
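
    The entry above credits its multi-GPU support to DistributedDataParallel. The sketch below is a generic, hedged illustration of that PyTorch mechanism, not code from this repository; the tiny linear model merely stands in for a detector.

    # Generic PyTorch DistributedDataParallel skeleton (not SSD-specific code).
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group("nccl")               # one process per GPU, e.g. launched via torchrun
        rank = dist.get_rank()
        torch.cuda.set_device(rank % torch.cuda.device_count())

        model = torch.nn.Linear(10, 2).cuda()         # stand-in for an SSD detector
        model = DDP(model, device_ids=[torch.cuda.current_device()])
        opt = torch.optim.SGD(model.parameters(), lr=1e-3)

        x = torch.randn(32, 10).cuda()
        y = torch.randint(0, 2, (32,)).cuda()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()                               # gradients are all-reduced across processes
        opt.step()
        dist.destroy_process_group()

    if __name__ == "__main__":
        main()   # e.g.: torchrun --nproc_per_node=NUM_GPUS this_script.py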
  • 23
    DALI

    A GPU-accelerated library containing highly optimized building blocks

    ..., cropping, resizing, and many other augmentations. These data processing pipelines, which are currently executed on the CPU, have become a bottleneck, limiting the performance and scalability of training and inference. DALI addresses the problem of the CPU bottleneck by offloading data preprocessing to the GPU. Additionally, DALI relies on its own execution engine, built to maximize the throughput of the input pipeline. A brief, hedged pipeline sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
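
    A hedged sketch of the GPU-offloaded input pipeline the DALI entry above describes; the image directory is a placeholder and is assumed to contain class-labelled subfolders of JPEGs.

    # Hypothetical DALI pipeline: read, decode on the GPU, resize, normalize.
    from nvidia.dali import pipeline_def, fn, types

    @pipeline_def
    def image_pipeline(data_dir):
        jpegs, labels = fn.readers.file(file_root=data_dir, random_shuffle=True)
        images = fn.decoders.image(jpegs, device="mixed")    # JPEG decoding on the GPU
        images = fn.resize(images, resize_x=224, resize_y=224)
        images = fn.crop_mirror_normalize(images, dtype=types.FLOAT, output_layout="CHW")
        return images, labels

    pipe = image_pipeline("/path/to/images",                 # placeholder directory
                          batch_size=32, num_threads=4, device_id=0)
    pipe.build()
    images, labels = pipe.run()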
  • 24
    Scalene

    High-performance CPU, GPU, and memory profiler for Python

    Scalene is a high-performance CPU, GPU, and memory profiler for Python that does a number of things that other Python profilers do not and cannot do. It runs orders of magnitude faster than other profilers while delivering far more detailed information. Once Scalene has profiled your program, it will launch a web browser with an interactive user interface (all processing is done locally). Hover over bars to see breakdowns of CPU and memory consumption, and click on underlined column headers... A brief usage sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
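
    A brief, hedged sketch of profiling a small script with Scalene; the workload is illustrative, and the tool is invoked from a shell as shown in the comments.

    # example.py -- a toy workload to profile; run it with Scalene, e.g.:
    #   scalene example.py        (or: python -m scalene example.py)
    import numpy as np

    def allocate():
        # Memory-heavy: Scalene attributes allocations line by line.
        return [np.zeros((1000, 1000)) for _ in range(20)]

    def compute(mats):
        # CPU-heavy: repeated elementwise work.
        return sum(float((m + 1.0).sum()) for m in mats)

    if __name__ == "__main__":
        print(compute(allocate()))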
  • 25
    Faiss

    Library for efficient similarity search and clustering dense vectors

    Faiss is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning. Faiss is written in C++ with complete wrappers for Python/numpy. Some of the most useful algorithms are implemented on the GPU. It is developed by Facebook AI Research. Faiss contains several methods for similarity search. It assumes... A brief, hedged usage sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
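
    A minimal, hedged sketch of the similarity search the Faiss entry above describes, using an exact L2 index on random vectors; GPU indexes follow the same pattern (e.g. via faiss.index_cpu_to_gpu).

    # Hypothetical Faiss usage: brute-force L2 nearest-neighbour search.
    import numpy as np
    import faiss

    d = 64                                                  # vector dimensionality
    xb = np.random.random((10000, d)).astype("float32")    # database vectors
    xq = np.random.random((5, d)).astype("float32")        # query vectors

    index = faiss.IndexFlatL2(d)     # exact (brute-force) L2 index
    index.add(xb)                    # add database vectors
    D, I = index.search(xq, 4)       # distances and indices of the 4 nearest neighbours
    print(I)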