Showing 149 open source projects for "cpu"

  • 1
    Lama Cleaner

    Image inpainting tool powered by SOTA AI Model

    ... their work. Completely free and open-source, fully self-hosted, supports CPU & GPU. Windows 1-Click Installer, classical image inpainting algorithm powered by cv2. Multiple SOTA AI models, and various inpainting strategies. Run as a desktop application. Interactive Segmentation on any object.
    Downloads: 42 This Week
  • 2
    Roop

    One-click face swap

    Take a video and replace the face with a face of your choice. You only need one image of the desired face. No dataset, and no training.
    Downloads: 31 This Week
  • 3
    OCRmyPDF

    OCRmyPDF adds an OCR text layer to scanned PDF files

    OCRmyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, allowing them to be searched. PDF is the best format for storing and exchanging scanned documents. Unfortunately, PDFs can be difficult to modify. OCRmyPDF makes it easy to apply image processing and OCR (recognized, searchable text) to existing PDFs.
    Downloads: 34 This Week
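    The OCRmyPDF entry above describes adding a searchable OCR text layer to an existing PDF. A minimal sketch of that workflow using OCRmyPDF's Python API is shown below; the file names and optional flags are placeholder choices.

```python
import ocrmypdf

# Add an OCR text layer to a scanned PDF so its contents become searchable.
# "scanned.pdf" and "searchable.pdf" are placeholder file names.
ocrmypdf.ocr(
    "scanned.pdf",      # input PDF, e.g. a scan with no text layer
    "searchable.pdf",   # output PDF with the OCR text layer applied
    deskew=True,        # optionally straighten skewed pages before OCR
    language="eng",     # Tesseract language pack(s) to use
)
```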
  • 4
    ComfyUI

    The most powerful and modular diffusion model GUI, API, and backend

    ComfyUI is the most powerful and modular diffusion model GUI, API, and backend. This UI lets you design and execute advanced Stable Diffusion pipelines using a graph/nodes/flowchart-based interface. We are a team dedicated to iterating on and improving ComfyUI, supporting the ComfyUI ecosystem with tools like the node manager, node registry, CLI, automated testing, and public documentation. Open source AI models will win in the long run against closed models, and we are only at the beginning. Our core mission...
    Downloads: 30 This Week
  • 5
    MESHROOM

    3D reconstruction software

    ... into HDR. Alignment of panorama images. Support for fisheye optics. Automatically estimate fisheye circle or manually edit it. Take advantage of motorized-head file. Easy to integrate in your Renderfarm System. Add specific rules to select the most suitable machines regarding CPU, RAM, GPU requirements of each Node.
    Downloads: 30 This Week
  • 6
    AIMET

    AIMET is a library that provides advanced quantization and compression

    ... accelerators. Quantized inference is significantly faster than floating point inference. For example, models that we’ve run on the Qualcomm® Hexagon™ DSP rather than on the Qualcomm® Kryo™ CPU have resulted in a 5x to 15x speedup. Plus, an 8-bit model also has a 4x smaller memory footprint relative to a 32-bit model. However, often when quantizing a machine learning model (e.g., from 32-bit floating point to an 8-bit fixed point value), the model accuracy is sacrificed.
    Downloads: 28 This Week
  • 7
    Tactical RMM

    A remote monitoring & management tool, built with Django, Vue and Go

    ... Windows patch management. Automated checks with email/SMS alerting (CPU, disk, memory, services, scripts, event logs). Automated task runner (run scripts on a schedule). Remote software installation via Chocolatey. Software and hardware inventory.
    Downloads: 14 This Week
  • 8
    auto-cpufreq

    Automatic CPU speed & power optimizer for Linux

    Automatic CPU speed & power optimizer for Linux. Actively monitors laptop battery state, CPU usage, CPU temperature, and system load, ultimately allowing you to improve battery life without making any compromises.
    Downloads: 2 This Week
  • 9
    Text Generation Web UI

    A gradio web UI for running Large Language Models like LLaMA

    ... efficient text streaming. Parameter presets, 8-bit mode. Layers splitting across GPU(s), CPU, and disk. CPU mode, FlexGen, DeepSpeed ZeRO-3, API with streaming and without streaming. LLaMA model, including 4-bit GPTQ. RWKV model, LoRA (loading and training), Softprompts, and extensions.
    Downloads: 8 This Week
  • 10
    Keras

    Python-based neural networks API

    Python Deep Learning library
    Downloads: 5 This Week
  • 11
    Glances

    An eye on your system

    Glances is an open source, cross-platform monitoring tool that aims to provide a significant amount of monitoring information through a curses or web-based interface. The information displayed adapts dynamically to the size of the user interface. Glances can work in client/server mode and is also capable of remote monitoring. All system statistics can be exported to files or external time/value databases. Glances gets information from your system through various libraries,...
    Downloads: 5 This Week
  • 12
    proxy.py

    Utilize all available CPU cores for accepting new client connections

    proxy.py is made with performance in mind. By default, proxy.py will try to utilize all CPU cores available to it for accepting new client connections. This is achieved by starting an AcceptorPool, which listens on the configured server port. The AcceptorPool then starts Acceptor processes (--num-acceptors) to accept incoming client connections. In addition, if --threadless is enabled, a ThreadlessPool is set up, which starts Threadless processes (--num-workers) to handle the incoming client connections. Each... A brief embedded-usage sketch follows this entry.
    Downloads: 4 This Week
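    As a rough sketch of the acceptor/worker flags described above, the snippet below starts proxy.py embedded in a Python program. It assumes proxy.main() accepts CLI-style flags as a list, as in the project's embedded-mode examples; the port and process counts are arbitrary example values.

```python
import proxy

if __name__ == "__main__":
    # Example values only; tune acceptors/workers to the available CPU cores.
    proxy.main([
        "--port", "8899",         # port the AcceptorPool listens on
        "--num-acceptors", "2",   # Acceptor processes accepting connections
        "--threadless",           # handle clients without a thread per client
        "--num-workers", "4",     # Threadless worker processes
    ])
```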
  • 13
    AI Upscaler for Blender

    AI Upscaler for Blender using Real-ESRGAN

    ... on the CPU. Blender renders a low-resolution image. The Real-ESRGAN Upscaler upscales the low-resolution image to a higher-resolution image. Real-ESRGAN is a deep learning upscaler that uses neural networks to achieve excellent results by adding in detail when it upscales.
    Downloads: 2 This Week
  • 14
    KServe

    Standardized Serverless ML Inference Platform on Kubernetes

    KServe provides a Kubernetes Custom Resource Definition for serving machine learning (ML) models on arbitrary frameworks. It aims to solve production model serving use cases by providing performant, high abstraction interfaces for common ML frameworks like Tensorflow, XGBoost, ScikitLearn, PyTorch, and ONNX. It encapsulates the complexity of autoscaling, networking, health checking, and server configuration to bring cutting edge serving features like GPU Autoscaling, Scale to Zero, and...
    Downloads: 2 This Week
  • 15
    Stable Diffusion in Docker

    Run the Stable Diffusion releases in a Docker container

    ... a suitable GPU you can set the options --device cpu and --onnx instead. Since it pulls the model from Hugging Face, you will need to create a user access token in your Hugging Face account. Save the user access token in a file called token.txt and make sure it is available when building the container. Create an image from an existing image and a text prompt. Modify an existing image with its depth map and a text prompt.
    Downloads: 2 This Week
  • 16
    DocTR

    Library for OCR-related tasks powered by Deep Learning

    DocTR provides an easy and powerful way to extract valuable information from your documents. Seamlessly process documents for Natural Language Understanding tasks: we provide OCR predictors to parse textual information (localize and identify each word) from your documents. Robust 2-stage (detection + recognition) OCR predictors with pretrained parameters. User-friendly: 3 lines of code to load a document and extract text with a predictor, as sketched below. State-of-the-art performances on public document...
    Downloads: 2 This Week
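    The DocTR entry above advertises a three-line predictor workflow; a minimal sketch of it using docTR's documented API is shown below, with a placeholder PDF path.

```python
from doctr.io import DocumentFile
from doctr.models import ocr_predictor

# Load a pretrained two-stage (detection + recognition) OCR predictor.
model = ocr_predictor(pretrained=True)

# "sample.pdf" is a placeholder; DocumentFile can also read image files.
doc = DocumentFile.from_pdf("sample.pdf")

# Run OCR and export the localized, recognized words as nested dictionaries.
result = model(doc)
print(result.export())
```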
  • 17
    Triton Inference Server

    The Triton Inference Server provides an optimized cloud

    Triton Inference Server is an open-source inference serving software that streamlines AI inferencing. Triton enables teams to deploy any AI model from multiple deep learning and machine learning frameworks, including TensorRT, TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more. Triton supports inference across cloud, data center, edge, and embedded devices on NVIDIA GPUs, x86 and ARM CPUs, or AWS Inferentia. Triton delivers optimized performance for many query types, including... A client-side usage sketch follows this entry.
    Downloads: 2 This Week
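    As a hedged illustration of querying a running Triton server from Python, the sketch below uses the tritonclient HTTP API; the model name, tensor names, shape, and dtype are placeholders that must match the deployed model's configuration.

```python
import numpy as np
import tritonclient.http as httpclient

# Assumes a Triton server is listening on localhost:8000 (the HTTP default).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Placeholder input tensor; its name, shape, and dtype must match the model.
data = np.random.rand(1, 16).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

# "my_model" and "OUTPUT0" are placeholders for a model in the repository.
result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```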
  • 18
    TorchQuantum

    A PyTorch-based framework for Quantum Classical Simulation

    A PyTorch-based framework for Quantum Classical Simulation, Quantum Machine Learning, Quantum Neural Networks, and Parameterized Quantum Circuits, with support for easy deployment on real quantum computers. It targets researchers working on quantum algorithm design, parameterized quantum circuit training, quantum optimal control, quantum machine learning, and quantum neural networks. Features include a dynamic computation graph, automatic gradient computation, fast GPU support, and batched tensorized processing.
    Downloads: 1 This Week
  • 19
    whisper-timestamped

    Multilingual Automatic Speech Recognition with word-level timestamps

    Multilingual Automatic Speech Recognition with word-level timestamps and confidence. Whisper is a set of multilingual, robust speech recognition models trained by OpenAI that achieve state-of-the-art results in many languages. Whisper models were trained to predict approximate timestamps on speech segments (most of the time with 1-second accuracy), but they do not natively predict word timestamps. This repository proposes an implementation to predict word timestamps and provide a more... A minimal transcription sketch follows this entry.
    Downloads: 1 This Week
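    A minimal sketch of how whisper-timestamped is typically invoked on CPU, following the pattern in its README; the model size and audio path are placeholder choices.

```python
import json

import whisper_timestamped as whisper

# "tiny" and "audio.wav" are placeholders; larger models are slower on CPU
# but more accurate.
model = whisper.load_model("tiny", device="cpu")
audio = whisper.load_audio("audio.wav")

result = whisper.transcribe(model, audio)

# The result follows Whisper's segment structure, extended with per-word
# timestamps and confidence scores.
print(json.dumps(result, indent=2, ensure_ascii=False))
```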
  • 20
    PennyLane

    A cross-platform Python library for differentiable programming

    ..., PyTorch, JAX, or TensorFlow, allowing hybrid CPU-GPU-QPU computations. The same quantum circuit model can be run on different devices. Install plugins to run your computational circuits on more devices, including Strawberry Fields, Amazon Braket, Qiskit and IBM Q, Google Cirq, Rigetti Forest, and the Microsoft QDK.
    Downloads: 1 This Week
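    To make the device model described in the PennyLane entry above concrete, here is a small sketch that runs a parameterized circuit on PennyLane's built-in CPU simulator and differentiates it; a plugin-provided device could be substituted by changing the device string.

```python
import pennylane as qml
from pennylane import numpy as np

# "default.qubit" is PennyLane's built-in simulator; plugin devices
# (e.g. Braket or Qiskit backends) expose the same interface.
dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

theta = np.array(0.3, requires_grad=True)
print(circuit(theta))            # expectation value of Pauli-Z
print(qml.grad(circuit)(theta))  # gradient computed via autodiff
```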
  • 21
    tvm

    Open deep learning compiler stack for CPU, GPU, etc.

    Apache TVM is an open source machine learning compiler framework for CPUs, GPUs, and machine learning accelerators. It aims to enable machine learning engineers to optimize and run computations efficiently on any hardware backend. The vision of the Apache TVM Project is to host a diverse community of experts and practitioners in machine learning, compilers, and systems architecture to build an accessible, extensible, and automated open-source framework that optimizes current and emerging...
    Downloads: 1 This Week
  • 22
    PyTorch Geometric

    Geometric deep learning extension library for PyTorch

    ... of functionality of PyTorch Geometric to other packages, which need to be installed additionally. These packages come with their own CPU and GPU kernel implementations based on C++/CUDA extensions. We do not recommend installation as a root user on your system Python. Please set up an Anaconda/Miniconda environment or create a Docker image. We provide pip wheels for all major OS/PyTorch/CUDA combinations.
    Downloads: 1 This Week
  • 23
    Agent360

    360 monitoring agent

    360 Monitoring is a web service that monitors and displays statistics of your server performance. Agent360 is OS-agnostic software compatible with Python 3.7 and 3.8. It's been optimized to have low CPU consumption and comes with an extendable set of useful plugins.
    Downloads: 0 This Week
  • 24
    FastChat

    Open platform for training, serving, and evaluating language models

    FastChat is an open platform for training, serving, and evaluating large language model-based chatbots. If you do not have enough memory, you can enable 8-bit compression by adding --load-8bit to the commands above. This can reduce memory usage by around half with slightly degraded model quality. It is compatible with the CPU, GPU, and Metal backends. Vicuna-13B with 8-bit compression can run on a single NVIDIA 3090/4080/T4/V100 (16GB) GPU. In addition to that, you can add --cpu-offloading...
    Downloads: 0 This Week
  • 25
    Amazon CodeGuru Profiler Python Agent

    Amazon CodeGuru Profiler collects runtime performance data from your live applications and provides recommendations that can help you fine-tune your application performance. Using machine learning algorithms, CodeGuru Profiler can help you find your most expensive lines of code and suggest ways you can improve efficiency and remove CPU bottlenecks. CodeGuru Profiler provides different visualizations of profiling data to help you identify what code is running on the CPU, see how much time...
    Downloads: 0 This Week
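    A hedged sketch of how the Python agent described above is typically started inside an application, based on the agent's documented entry point; the profiling group name and region are placeholders, and the profiling group plus AWS credentials must already be set up.

```python
from codeguru_profiler_agent import Profiler

# Placeholder profiling group and region; the group must already exist in
# CodeGuru Profiler and AWS credentials must be available to the process.
Profiler(profiling_group_name="MyProfilingGroup",
         region_name="us-west-2").start()

# ... run the application as usual; the agent samples running threads in the
# background and submits CPU profiles to CodeGuru Profiler ...
```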