Search Results for "cpu gpu monitor" - Page 3

Showing 403 open source projects for "cpu gpu monitor"

  • 1
    DocTR

    Library for OCR-related tasks powered by Deep Learning

    DocTR provides an easy and powerful way to extract valuable information from your documents. Seamlessly process documents for Natural Language Understanding tasks: we provide OCR predictors to parse textual information (localize and identify each word) from your documents. Robust 2-stage (detection + recognition) OCR predictors with pretrained parameters. User-friendly: 3 lines of code to load a document and extract text with a predictor (see the sketch after this entry). State-of-the-art performance on public document...
    Downloads: 2 This Week
    Last Update:
    See Project
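    A minimal sketch of the "3 lines of code" flow described above, assuming doctr is installed; the PDF path is a hypothetical placeholder:

    ```python
    from doctr.io import DocumentFile
    from doctr.models import ocr_predictor

    model = ocr_predictor(pretrained=True)      # 2-stage detection + recognition predictor
    doc = DocumentFile.from_pdf("sample.pdf")   # load a document (placeholder path)
    result = model(doc)                         # localize and identify each word
    print(result.render())                      # plain-text view of the parsed document
    ```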
  • 2
    DGL

    Python package built to ease deep learning on graphs

    ... want to make it easy to implement the graph neural network model family. We also want to make the combination of graph-based modules and tensor-based modules (PyTorch or MXNet) as smooth as possible. DGL provides a powerful graph object that can reside on either CPU or GPU; it bundles structural data as well as features for better control. We provide a variety of functions for computing with graph objects, including efficient and customizable message-passing primitives for Graph Neural Networks (a small sketch follows this entry).
    Downloads: 1 This Week
    Last Update:
    See Project
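    A small sketch of the graph object described above, assuming DGL with a PyTorch backend; the edge list and feature size are illustrative:

    ```python
    import torch
    import dgl

    src = torch.tensor([0, 1, 2])
    dst = torch.tensor([1, 2, 3])
    g = dgl.graph((src, dst), num_nodes=4)   # graph resides on CPU by default
    g.ndata["feat"] = torch.randn(4, 8)      # node features bundled with the structure
    if torch.cuda.is_available():
        g = g.to("cuda")                     # move structure and features to the GPU
    ```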
  • 3
    NIM Pools Hub Miner

    CPU & GPU Nimiq Miner for NVIDIA & AMD

    Nimiq Miner with a focus on user experience and ease of use.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    OpenCost

    Cost monitoring for Kubernetes workloads and cloud costs

    ... for on-prem Kubernetes clusters using custom pricing. Monitor cloud-provider costs incurred outside the Kubernetes cluster, for resources like object storage, databases, and other managed services.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    Genv

    GPU environment management and cluster orchestration

    Genv is an open-source environment and cluster management system for GPUs. Genv lets you easily control, configure, monitor, and enforce the GPU resources that you are using in a GPU machine or cluster. It is intended to ease the process of GPU allocation for data scientists without code changes.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
    llama2-webui

    Run any Llama 2 locally with gradio UI on GPU or CPU from anywhere

    Running Llama 2 with gradio web UI on GPU or CPU from anywhere (Linux/Windows/Mac).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
    ImplicitGlobalGrid.jl

    Distributed parallelization of stencil-based GPU and CPU applications

    ImplicitGlobalGrid is an outcome of a collaboration of the Swiss National Supercomputing Centre, ETH Zurich (Dr. Samuel Omlin) with Stanford University (Dr. Ludovic Räss) and the Swiss Geocomputing Centre (Prof. Yuri Podladchikov). It renders the distributed parallelization of stencil-based GPU and CPU applications on a regular staggered grid almost trivial and enables close to ideal weak scaling of real-world applications on thousands of GPUs [1, 2, 3]. ImplicitGlobalGrid relies on the Julia...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    Zenith

    Sort of like top or htop but with zoom-able charts, CPU, GPU

    ... is unable to directly expose the NVIDIA GPU. Unlike the runtime zenith script, the Makefile has been set up to detect only the presence of the required NVIDIA libraries, so it is possible to build with NVIDIA support even without an NVIDIA GPU present.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    FastChat

    Open platform for training, serving, and evaluating language models

    FastChat is an open platform for training, serving, and evaluating large language model-based chatbots. If you do not have enough memory, you can enable 8-bit compression by adding --load-8bit to the commands above. This can reduce memory usage by around half with slightly degraded model quality. It is compatible with the CPU, GPU, and Metal backends. Vicuna-13B with 8-bit compression can run on a single NVIDIA 3090/4080/T4/V100(16GB) GPU. In addition to that, you can add --cpu-offloading...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    Core ML Tools

    Core ML tools contain supporting tools for Core ML model conversion

    ... performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption. Running a model strictly on the user’s device removes any need for a network connection, which helps keep the user’s data private and your app responsive. A conversion sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
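    A hedged conversion sketch, assuming coremltools and PyTorch are installed; the toy model, input shape, and output path are illustrative:

    ```python
    import torch
    import coremltools as ct

    torch_model = torch.nn.Sequential(torch.nn.Linear(4, 2)).eval()
    example_input = torch.rand(1, 4)
    traced = torch.jit.trace(torch_model, example_input)

    # Convert the traced model to an ML Program that Core ML can schedule
    # across CPU, GPU, and the Neural Engine on-device.
    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(name="input", shape=(1, 4))],
        convert_to="mlprogram",
    )
    mlmodel.save("tiny.mlpackage")
    ```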
  • 11
    DocArray

    The data structure for multimodal data

    ... science powerhouse: greatly accelerate data scientists’ work on embedding, k-NN matching, querying, visualizing, evaluating via Torch/TensorFlow/ONNX/PaddlePaddle on CPU/GPU. Data in transit: optimized for network communication, ready-to-wire at any time with fast and compressed serialization in Protobuf, bytes, base64, JSON, CSV, DataFrame. Perfect for streaming and out-of-memory data. One-stop k-NN: unified and consistent API for mainstream vector databases.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 12
    TensorFlow Addons

    Useful extra functionality for TensorFlow 2.x maintained by SIG-addons

    TensorFlow Addons is a repository of contributions that conform to well-established API patterns but implement new functionality not available in core TensorFlow. TensorFlow natively supports a large number of operators, layers, metrics, losses, and optimizers. However, in a fast-moving field like ML, there are many interesting new developments that cannot be integrated into core TensorFlow (because their broad applicability is not yet clear, or because they are mostly used by a smaller subset of the... A short usage sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
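    A short sketch of pulling an Addons optimizer and metric into a core Keras workflow; the toy model and hyperparameters are illustrative:

    ```python
    import tensorflow as tf
    import tensorflow_addons as tfa

    model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
    model.compile(
        optimizer=tfa.optimizers.LAMB(learning_rate=1e-3),  # optimizer contributed via SIG-addons
        loss="sparse_categorical_crossentropy",
        metrics=[tfa.metrics.CohenKappa(num_classes=10)],   # metric not available in core TF
    )
    ```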
  • 13
    Ray

    A unified framework for scalable computing

    Modern workloads like deep learning and hyperparameter tuning are compute-intensive and require distributed or parallel execution. Ray makes it effortless to parallelize single-machine code: go from a single CPU to multi-core, multi-GPU, or multi-node with minimal code changes. Accelerate your PyTorch and TensorFlow workloads with a more resource-efficient and flexible distributed execution framework powered by Ray. Accelerate your hyperparameter search workloads with Ray Tune. Find the best... A minimal sketch follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
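    A minimal sketch of going from single-machine code to parallel tasks with Ray; the workload is illustrative:

    ```python
    import ray

    ray.init()  # starts a local Ray runtime (connects to a cluster if one is configured)

    @ray.remote
    def square(x):
        return x * x

    futures = [square.remote(i) for i in range(8)]  # tasks run in parallel across cores
    print(ray.get(futures))
    ```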
  • 14
    NVIDIA Merlin

    Library providing end-to-end GPU-accelerated recommender systems

    ... on the NVIDIA developer website. Transform data (ETL) for preprocessing and engineering features. Accelerate your existing training pipelines in TensorFlow, PyTorch, or FastAI by leveraging optimized, custom-built data loaders. Scale large deep learning recommender models by distributing large embedding tables that exceed available GPU and CPU memory. Deploy data transformations and trained models to production with only a few lines of code.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    DALI

    A GPU-accelerated library containing highly optimized building blocks

    ..., cropping, resizing, and many other augmentations. These data processing pipelines, which are currently executed on the CPU, have become a bottleneck, limiting the performance and scalability of training and inference. DALI addresses the problem of the CPU bottleneck by offloading data preprocessing to the GPU. Additionally, DALI relies on its own execution engine, built to maximize the throughput of the input pipeline. A pipeline sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
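    A hedged sketch of a DALI pipeline that offloads JPEG decoding and resizing to the GPU; the data directory, batch size, and image size are placeholders, and a CUDA-capable GPU is assumed:

    ```python
    from nvidia.dali import pipeline_def, fn

    @pipeline_def(batch_size=32, num_threads=4, device_id=0)
    def train_pipeline():
        jpegs, labels = fn.readers.file(file_root="/data/train")
        images = fn.decoders.image(jpegs, device="mixed")       # decode on the GPU
        images = fn.resize(images, resize_x=224, resize_y=224)  # GPU-side resize
        return images, labels

    pipe = train_pipeline()
    pipe.build()
    images, labels = pipe.run()
    ```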
  • 16
    Scalene

    High-performance CPU, GPU, and memory profiler for Python

    Scalene is a high-performance CPU, GPU, and memory profiler for Python that does a number of things that other Python profilers do not and cannot do. It runs orders of magnitude faster than other profilers while delivering far more detailed information. Once Scalene has profiled your program, it will launch a web browser with an interactive user interface (all processing is done locally). Hover over bars to see breakdowns of CPU and memory consumption, and click on underlined column headers... A small sketch of the programmatic API follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
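    A short sketch of Scalene's programmatic on/off switch, assuming the script is launched with profiling initially off (for example, `scalene --off program.py`); the workload is illustrative:

    ```python
    from scalene import scalene_profiler

    def hot_loop():
        return sum(i * i for i in range(10_000_000))

    scalene_profiler.start()  # profile only the region of interest
    hot_loop()
    scalene_profiler.stop()
    ```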
  • 17
    SSD in PyTorch 1.0

    High quality, fast, modular reference implementation of SSD in PyTorch

    This repository implements SSD (Single Shot MultiBox Detector). The implementation is heavily influenced by the projects ssd.pytorch, pytorch-ssd, and maskrcnn-benchmark. This repository aims to be the code base for research based on SSD. Multi-GPU training and inference: we use DistributedDataParallel; you can train or test with an arbitrary number of GPUs, and the training schema will change accordingly. Add your own modules without pain. We abstract backbone, Detector, BoxHead, BoxPredictor, etc. You can...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    Ultralight

    Lightweight, high-performance HTML renderer for game developers

    Use our portable, WebKit-based SDK to effortlessly display modern web content everywhere. Available for desktop apps, game consoles, TVs, embedded device displays, servers, and more. Official API for C and C++, with bindings for more. Render web-content on the GPU via Direct3D, Metal, OpenGL, or your own engine for unmatched visual performance. Render web-content on the CPU via SIMD/parallel for incredibly easy integration with any environment (including server-side!). Ultralight is engineered...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    ParallelStencil.jl

    Package for writing high-level code for parallel stencil computations

    ... implementation; in absolute terms, it reached 70% of the theoretical upper performance bound of the Nvidia P100 GPU used, as defined by the effective throughput metric, T_eff. ParallelStencil relies on the native kernel programming capabilities of CUDA.jl and AMDGPU.jl and on Base.Threads for high-performance computations on GPUs and CPUs, respectively. It is seamlessly interoperable with ImplicitGlobalGrid.jl, which renders the distributed parallelization of stencil-based GPU and CPU apps.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    API-for-Open-LLM

    OpenAI-style API for open large language models

    API-for-Open-LLM is a lightweight API server designed for deploying and serving open large language models (LLMs), offering a simple way to integrate LLMs into applications. A client-side sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
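    A client-side sketch using an OpenAI-compatible Python client against such a server; the base URL, port, and model name are assumptions for illustration:

    ```python
    from openai import OpenAI  # assumes the openai>=1.0 client package

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
    resp = client.chat.completions.create(
        model="chatglm3-6b",  # hypothetical model name served by the API
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(resp.choices[0].message.content)
    ```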
  • 21
    TensorLy

    Tensor Learning in Python

    TensorLy is a Python library that aims at making tensor learning simple and accessible. It makes it easy to perform tensor decomposition, tensor learning, and tensor algebra. Its backend system lets you seamlessly perform computations with NumPy, PyTorch, JAX, TensorFlow, CuPy, or Paddle, and run methods at scale on CPU or GPU (a short sketch follows this entry).
    Downloads: 0 This Week
    Last Update:
    See Project
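    A short sketch of a CP (PARAFAC) decomposition with TensorLy; switching the backend to PyTorch assumes PyTorch is installed, and the toy tensor is illustrative:

    ```python
    import tensorly as tl
    from tensorly.decomposition import parafac

    tl.set_backend("pytorch")  # or "numpy", "jax", "tensorflow", "cupy", "paddle"

    tensor = tl.tensor([[[1.0, 2.0], [3.0, 4.0]],
                        [[5.0, 6.0], [7.0, 8.0]]])
    weights, factors = parafac(tensor, rank=2)  # CP decomposition into rank-2 factors
    print([f.shape for f in factors])
    ```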
  • 22
    Floem

    A native Rust UI library with fine-grained reactivity

    Floem is a cross-platform GUI framework for Rust. It aims to be extremely performant while providing world-class developer ergonomics. Supporting both GPU and CPU rendering, Floem gives you performance that's closest to bare metal. Primitives are also provided to help the developer write performant UI code without too much effort.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    DeepDetect

    Deep Learning API and server in C++14 with support for Caffe, PyTorch

    ... of image tagging, object detection, segmentation, OCR, audio, video, and text classification, plus CSV for tabular data and time series. Neural network templates for the most effective architectures for GPU, CPU, and embedded devices. Training in a few hours and with small data thanks to 25+ pre-trained models. Fully open source, with an ecosystem of tools (API clients, video, annotation, ...). Fast server written in pure C++, with a single codebase for cloud, desktop, and embedded.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    AWS Deep Learning Containers

    A set of Docker images for training and serving models in TensorFlow

    AWS Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet. Deep Learning Containers provide optimized environments with TensorFlow and MXNet, Nvidia CUDA (for GPU instances), and Intel MKL (for CPU instances) libraries and are available in the Amazon Elastic Container Registry (Amazon ECR). The AWS DLCs are used in Amazon SageMaker as the default vehicles for your SageMaker jobs such as training, inference...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    Halide

    A language for fast, portable data-parallel computation

    Halide is a programming language for fast, portable data-parallel computation. It was designed to make writing high-performance image and array processing code much easier on modern machines. It works on all major operating systems and with several CPU architectures (X86, ARM, MIPS, Hexagon, PowerPC) and GPU Compute APIs (CUDA, OpenCL, OpenGL, among others). It isn't a standalone programming language, however; rather, it is embedded in C++, which means you write C++ code, building...
    Downloads: 0 This Week
    Last Update:
    See Project