Showing 761 open source projects for "linux cpu"

  • 1
    Scalene

    High-performance CPU, GPU, and memory profiler for Python

    Scalene is a high-performance CPU, GPU and memory profiler for Python that does a number of things that other Python profilers do not and cannot do. It runs orders of magnitude faster than other profilers while delivering far more detailed information. Once Scalene has profiled your program, it will launch a web browser with an interactive user interface (all processing is done locally). Hover over bars to see breakdowns of CPU and memory consumption, and click on underlined column headers...
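    To make the workflow concrete, here is a minimal sketch of profiling just the hot section of a script; the programmatic start/stop API shown is taken from Scalene's documentation as I recall it, so treat the exact names as assumptions and verify against the current docs.

    ```python
    # Hypothetical sketch: profile only the hot part of a script with Scalene.
    # Typical CLI usage:  scalene example.py          (profile everything)
    # Selective usage:    scalene --off example.py    (toggle with the calls below)
    from scalene import scalene_profiler  # assumed import path from Scalene's docs

    def allocate_and_sum(n: int) -> float:
        data = [float(i) for i in range(n)]   # per-line memory attribution
        return sum(x * x for x in data)       # per-line CPU attribution

    if __name__ == "__main__":
        scalene_profiler.start()              # begin profiling (assumed API)
        total = allocate_and_sum(1_000_000)
        scalene_profiler.stop()               # end profiling; results open in the web UI
        print(total)
    ```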
  • 2
    gops

    A tool to list and diagnose Go processes currently running

    ... the target binary as the same user that runs the gops binary. To use gops in remote mode you need to know the target's agent address. In local mode, use the process's PID as the target; in remote mode, the target is a host:port combination. gops supports CPU and heap pprof profiles. After reading either a heap or CPU profile, it shells out to the go tool pprof and lets you interactively examine the profiles.
  • 3
    TensorLy

    Tensor Learning in Python

    TensorLy is a Python library that aims to make tensor learning simple and accessible. It allows you to easily perform tensor decomposition, tensor learning, and tensor algebra. Its backend system lets you seamlessly perform computation with NumPy, PyTorch, JAX, TensorFlow, CuPy, or Paddle, and run methods at scale on CPU or GPU.
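    To make the backend idea concrete, below is a small sketch of a CP (PARAFAC) decomposition; it assumes a recent TensorLy release where parafac and cp_to_tensor are available under these names.

    ```python
    # Minimal sketch: CP decomposition of a random tensor with TensorLy.
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    tl.set_backend("numpy")                 # could also be "pytorch", "jax", etc.

    tensor = tl.tensor(np.random.rand(8, 8, 8))
    cp = parafac(tensor, rank=3)            # factorize into a rank-3 CP form
    reconstruction = tl.cp_to_tensor(cp)    # rebuild the full tensor from the factors

    error = tl.norm(tensor - reconstruction) / tl.norm(tensor)
    print(f"relative reconstruction error: {error:.3f}")
    ```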
  • 4
    MegEngine

    Easy-to-use deep learning framework with 3 key features

    .... Gain the lowest memory usage when running inference on a model by leveraging our unique pushdown memory planner. NOTE: MegEngine now supports Python installation on Linux-64bit/Windows-64bit/MacOS(CPU-Only)-10.14+/Android 7+(CPU-Only) platforms with Python from 3.5 to 3.8. On Windows 10 you can either install the Linux distribution through Windows Subsystem for Linux (WSL) or install the Windows distribution directly. Many other platforms are supported for inference.
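    For a sense of the API, here is a tiny sketch of MegEngine's NumPy-like tensor interface; the function names follow MegEngine's basic documentation as I recall it and should be double-checked.

    ```python
    # Minimal sketch: basic tensor math with MegEngine (API names assumed from its docs).
    import numpy as np
    import megengine as mge
    import megengine.functional as F

    a = mge.tensor(np.random.randn(2, 3).astype("float32"))
    b = mge.tensor(np.random.randn(3, 4).astype("float32"))

    c = F.matmul(a, b)        # runs on CPU or GPU depending on the installed build
    print(c.shape)
    print(c.numpy())
    ```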
  • 5
    CppServer

    Fast and low latency asynchronous socket server & client C++ library

    Ultra-fast and low-latency asynchronous socket server & client C++ library with support for TCP, SSL, UDP, HTTP, HTTPS, and WebSocket protocols, and a solution to the 10K connections problem. Cross-platform (Linux, macOS, Windows). Asynchronous communication. Supported CPU scalability designs: I/O service per thread, thread pool. Supported transport protocols: TCP, SSL, UDP, UDP multicast. Supported Web protocols: HTTP, HTTPS, WebSocket, WebSocket secure. Supported Swagger OpenAPI interactive documentation...
  • 6
    frugally-deep

    A lightweight header-only library for using Keras (TensorFlow) models

    Use Keras models in C++ with ease. A lightweight header-only library for using Keras (TensorFlow) models in C++. Works out-of-the-box also when compiled into a 32-bit executable. (Of course, 64 bit is fine too.) Avoids temporarily allocating (potentially large chunks of) additional RAM during convolutions (by not materializing the im2col input matrix). Utterly ignores even the most powerful GPU in your system and uses only one CPU core per prediction. Quite fast on one CPU core, and you can run...
  • 7
    ImplicitGlobalGrid.jl

    Distributed parallelization of stencil-based GPU and CPU applications

    ImplicitGlobalGrid is an outcome of a collaboration of the Swiss National Supercomputing Centre, ETH Zurich (Dr. Samuel Omlin) with Stanford University (Dr. Ludovic Räss) and the Swiss Geocomputing Centre (Prof. Yuri Podladchikov). It renders the distributed parallelization of stencil-based GPU and CPU applications on a regular staggered grid almost trivial and enables close to ideal weak scaling of real-world applications on thousands of GPUs [1, 2, 3]. ImplicitGlobalGrid relies on the Julia...
  • 8
    Mosec

    A high-performance ML model serving framework, offers dynamic batching

    Mosec is a high-performance and flexible model-serving framework for building ML model-enabled backends and microservices. It bridges the gap between any machine learning model you have just trained and an efficient online service API.
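    A minimal sketch of what a Mosec service looks like, following the pattern in Mosec's examples; the names Worker, Server, and append_worker are assumptions drawn from that documentation.

    ```python
    # Minimal sketch: a Mosec worker serving a trivial "model" over HTTP.
    from mosec import Server, Worker


    class Square(Worker):
        def forward(self, data: dict) -> dict:
            # each JSON request body arrives as a dict; the return value is encoded back
            x = float(data.get("x", 0))
            return {"y": x * x}


    if __name__ == "__main__":
        server = Server()
        server.append_worker(Square, num=1)   # batching/concurrency is configured per worker
        server.run()                          # exposes an HTTP inference endpoint
    ```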
  • 9
    Chinese-LLaMA-Alpaca-2 v2.0

    Chinese LLaMA & Alpaca large language model + local CPU/GPU training

    This project has open-sourced the Chinese LLaMA model and the instruction-fine-tuned Chinese Alpaca large model to further promote open research on large models in the Chinese NLP community. Based on the original LLaMA, these models expand the Chinese vocabulary and use Chinese data for secondary pre-training, which further improves basic Chinese semantic understanding. At the same time, the Chinese Alpaca model further uses Chinese instruction data for fine-tuning, which...
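    One common way to try such a checkpoint locally is through Hugging Face transformers; the sketch below is purely illustrative, and the model identifier is a placeholder rather than a name confirmed by this project.

    ```python
    # Illustrative sketch: loading a Chinese Alpaca-style chat model with transformers.
    # The model id is a placeholder; substitute the checkpoint you actually downloaded.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "path/or/hub-id-of-chinese-alpaca-2"   # placeholder

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,     # use float32 for CPU-only inference
        device_map="auto",             # spread layers across GPU/CPU automatically
    )

    prompt = "Please introduce large language models in one sentence."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    ```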
  • 10
    NVIDIA Merlin

    Library providing end-to-end GPU-accelerated recommender systems

    ... on the NVIDIA developer website. Transform data (ETL) for preprocessing and engineering features. Accelerate your existing training pipelines in TensorFlow, PyTorch, or FastAI by leveraging optimized, custom-built data loaders. Scale large deep learning recommender models by distributing large embedding tables that exceed available GPU and CPU memory. Deploy data transformations and trained models to production with only a few lines of code.
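    As a sketch of the ETL side, NVTabular (part of Merlin) expresses feature transforms as an operator graph; the snippet assumes the nvtabular package is installed, and the column names and file paths are made up for illustration.

    ```python
    # Illustrative sketch: a tiny NVTabular preprocessing workflow (columns/paths are made up).
    import nvtabular as nvt
    from nvtabular import ops

    # Declare which transforms apply to which columns.
    cat_features = ["user_id", "item_id"] >> ops.Categorify()
    cont_features = ["price"] >> ops.Normalize()

    workflow = nvt.Workflow(cat_features + cont_features)

    dataset = nvt.Dataset("interactions.parquet")   # placeholder input file
    workflow.fit(dataset)                           # compute statistics such as category maps
    workflow.transform(dataset).to_parquet("processed/")
    ```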
  • 11
    dperf

    DPDK based 100Gbps network performance and load testing software

    ... of the network packet processing capability of the NIC and CPU. It can be used as a high-performance HTTP server or client for load testing.
  • 12
    AI Upscaler for Blender

    AI Upscaler for Blender using Real-ESRGAN

    ... on the CPU. Blender renders a low-resolution image. The Real-ESRGAN Upscaler upscales the low-resolution image to a higher-resolution image. Real-ESRGAN is a deep learning upscaler that uses neural networks to achieve excellent results by adding in detail when it upscales.
  • 13
    EaseProbe

    A simple, standalone, and lightweight tool

    ... command on a remote host and check the CPU, Memory, and Disk usage. MySQL. Connect to a MySQL server and run the SHOW STATUS SQL. Redis. Connect to a Redis server and run the PING command. Memcache. Connect to a Memcache server and run the version command or validate a given key/value pair. MongoDB. Connect to a MongoDB server and perform a ping. Kafka. Connect to a Kafka server and perform a list of all topics.
  • 14
    ServiceTalk

    A networking framework that evolves with your application

    ... knowledge of the EventLoop threading model. Executing CPU-intensive or "blocking" code requires manual thread hops. Subtle out-of-order execution of tasks when code executes both on and off the EventLoop thread. APIs are not tailored towards common application use cases (e.g. request/response, RPC, etc.). The asynchronous programming paradigm presents a barrier to entry in scenarios where vertical scalability is not a primary concern.
  • 15
    SSD in PyTorch 1.0

    High quality, fast, modular reference implementation of SSD in PyTorch

    This repository implements SSD (Single Shot MultiBox Detector). The implementation is heavily influenced by the projects ssd.pytorch, pytorch-ssd, and maskrcnn-benchmark. This repository aims to be the code base for research based on SSD. Multi-GPU training and inference: we use DistributedDataParallel; you can train or test with an arbitrary number of GPUs, and the training schema changes accordingly. Add your own modules without pain. We abstract the backbone, Detector, BoxHead, BoxPredictor, etc. You can...
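    The multi-GPU support described above relies on PyTorch's standard DistributedDataParallel mechanism; the sketch below illustrates that general mechanism only and is not code taken from this repository.

    ```python
    # Generic DistributedDataParallel illustration (not this repository's trainer).
    # Launch with, e.g.:  torchrun --nproc_per_node=2 train_ddp.py
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group(backend="nccl")          # one process per GPU
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = torch.nn.Linear(10, 2).cuda(local_rank)  # stand-in for the SSD detector
        model = DDP(model, device_ids=[local_rank])      # gradients sync across processes

        x = torch.randn(4, 10).cuda(local_rank)
        model(x).sum().backward()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()
    ```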
  • 16
    DeepDetect

    Deep Learning API and Server in C++14 with support for Caffe, PyTorch

    ... of image tagging, object detection, segmentation, OCR, audio, video, and text classification, and CSV for tabular data and time series. Neural network templates for the most effective architectures for GPU, CPU, and embedded devices. Training in a few hours and with small data thanks to 25+ pre-trained models. Fully open source, with an ecosystem of tools (API clients, video, annotation, ...). Fast server written in pure C++, with a single codebase for cloud, desktop & embedded.
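    DeepDetect is driven through a JSON REST API; the sketch below shows roughly what a prediction call looks like from Python, assuming a server on localhost:8080 and an already-created image service named "imgserv" (both are assumptions).

    ```python
    # Illustrative sketch: calling a DeepDetect /predict endpoint with requests.
    # Assumes a DeepDetect server on localhost:8080 and an existing service named "imgserv".
    import requests

    payload = {
        "service": "imgserv",
        "parameters": {"output": {"best": 3}},      # ask for the top-3 classes
        "data": ["https://example.com/cat.jpg"],    # placeholder image URL
    }

    resp = requests.post("http://localhost:8080/predict", json=payload, timeout=30)
    resp.raise_for_status()
    print(resp.json())
    ```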
  • 17
    Skytable

    Skytable is a fast, secure and reliable realtime NoSQL database

    ... to exploit all CPU cores, which helps lower your TCO. Written in Rust with expert-analyzed unsafe code for memory safety and TLS for encrypted connections. Have 1 MB of memory? That's all Skytable needs. With no platform-specific dependencies, Skytable can run on virtually anything that has an OS. Features like keyspaces, tables, data types, authn+authz, snapshots, and more are ready for you to use while we're working on several new data models and features.
  • 18
    AWS Deep Learning Containers

    A set of Docker images for training and serving models in TensorFlow

    AWS Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet. Deep Learning Containers provide optimized environments with TensorFlow and MXNet, Nvidia CUDA (for GPU instances), and Intel MKL (for CPU instances) libraries and are available in the Amazon Elastic Container Registry (Amazon ECR). The AWS DLCs are used in Amazon SageMaker as the default vehicles for your SageMaker jobs such as training, inference...
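    In SageMaker, a Deep Learning Container is typically referenced by its ECR image URI when constructing an estimator; the sketch below uses the SageMaker Python SDK with placeholder values for the image URI, IAM role, and S3 path.

    ```python
    # Illustrative sketch: launching a SageMaker training job from a Deep Learning Container.
    # The image URI, IAM role, and S3 path are placeholders, not real resources.
    import sagemaker
    from sagemaker.estimator import Estimator

    session = sagemaker.Session()

    estimator = Estimator(
        image_uri="<account>.dkr.ecr.<region>.amazonaws.com/tensorflow-training:<tag>",
        role="arn:aws:iam::<account>:role/<sagemaker-role>",
        instance_count=1,
        instance_type="ml.m5.xlarge",     # choose a GPU instance type for CUDA images
        sagemaker_session=session,
    )

    estimator.fit("s3://<bucket>/<training-data-prefix>/")
    ```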
  • 19
    Netty-socketio

    Socket.IO server implemented on Java

    ...-featured Java Profiler. YourKit, LLC is the creator of innovative and intelligent tools for profiling Java and .NET applications. Benchmark: CentOS, 1 CPU, 4 GB RAM, run on a VM; CPU 10%, memory 15%; 6,000 XHR long-polling sessions or 15,000 WebSocket sessions; 4,000 messages per second.
  • 20

    Halide

    A language for fast, portable data-parallel computation

    Halide is a programming language for fast, portable data-parallel computation. It was designed to make writing high-performance image and array processing code much easier on modern machines. It works on all major operating systems and with several CPU architectures (x86, ARM, MIPS, Hexagon, PowerPC) and GPU compute APIs (CUDA, OpenCL, OpenGL, among others). It isn't a standalone programming language, however; rather, it is embedded in C++, which means that you write C++ code, building...
  • 21
    barco

    Linux containers from scratch in C

    barco is a project I worked on to learn more about Linux containers and the Linux kernel, based on other guides on the internet. Linux containers are made up of a set of Linux kernel features. Namespaces are used to group kernel objects into different sets that can be accessed by specific process trees. There are different types of namespaces; for example, the PID namespace is used to isolate the process tree, while the network namespace is used to isolate the network stack.
  • 22
    CrossDB

    Ultra High-performance Lightweight Embedded and Server OLTP RDBMS

    CrossDB is an ultra-high-performance, lightweight embedded and server OLTP RDBMS. It is designed for high-performance scenarios where main memory can hold the entire database.
  • 23
    TorchQuantum

    A PyTorch-based framework for Quantum Classical Simulation

    A PyTorch-based framework for quantum-classical simulation, quantum machine learning, quantum neural networks, and parameterized quantum circuits, with support for easy deployment on real quantum computers. Aimed at researchers working on quantum algorithm design, parameterized quantum circuit training, quantum optimal control, quantum machine learning, and quantum neural networks. Dynamic computation graph, automatic gradient computation, fast GPU support, and batched, tensorized model processing.
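    For a sense of the PyTorch-style API, here is a tiny sketch of a Bell-pair circuit; the names QuantumDevice, torchquantum.functional, and get_states_1d follow TorchQuantum's examples as I recall them and should be verified against the current docs.

    ```python
    # Illustrative sketch: a 2-qubit Bell-state circuit with TorchQuantum (names assumed).
    import torchquantum as tq
    import torchquantum.functional as tqf

    # Batched state-vector simulator device (batch size 1, on CPU).
    qdev = tq.QuantumDevice(n_wires=2, bsz=1, device="cpu")

    tqf.hadamard(qdev, wires=0)        # put qubit 0 into superposition
    tqf.cnot(qdev, wires=[0, 1])       # entangle qubit 0 with qubit 1

    # The simulated state is an ordinary PyTorch tensor held by the device,
    # so it participates in autograd like any other tensor.
    print(qdev.get_states_1d())
    ```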
  • 24
    KubeAI

    Private Open AI on Kubernetes

    Get inference running on Kubernetes: LLMs, embeddings, speech-to-text. KubeAI serves an OpenAI-compatible HTTP API. Admins can configure ML models by using the Model Kubernetes Custom Resources. KubeAI can be thought of as a Model Operator (see Operator Pattern) that manages vLLM and Ollama servers.
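    Because KubeAI exposes an OpenAI-compatible API, any HTTP client can talk to it; the base URL and model name in the sketch below are placeholders for whatever you deployed.

    ```python
    # Illustrative sketch: calling KubeAI's OpenAI-compatible chat endpoint with requests.
    # The base URL and model name are placeholders for your own deployment.
    import requests

    base_url = "http://kubeai.kubeai.svc.cluster.local/openai/v1"   # placeholder in-cluster URL
    payload = {
        "model": "<your-configured-model>",
        "messages": [{"role": "user", "content": "Hello from inside the cluster!"}],
    }

    resp = requests.post(f"{base_url}/chat/completions", json=payload, timeout=60)
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])
    ```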
  • 25
    td

    Telegram client, in Go. (MTProto API)

    Telegram MTProto API client in Go for users and bots.