Showing 40 open source projects for "cpu"

  • 1
    rwkv.cpp

    INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model

    Besides the usual FP32, it supports FP16 and quantized INT4, INT5, and INT8 inference. The project is focused on the CPU, but cuBLAS is also supported. RWKV is a novel large language model architecture; the largest model in the family has 14B parameters. In contrast to Transformers, whose attention costs O(n^2), RWKV needs only the state from the previous step to compute logits, which makes it very CPU-friendly at long context lengths (a sketch of this recurrence follows the entry).
    Downloads: 4 This Week
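
    The recurrence is easy to picture in code. A hypothetical sketch of the idea only; none of these names come from rwkv.cpp:

        # Hypothetical sketch (not rwkv.cpp's API): a recurrent model carries a
        # fixed-size state instead of a key/value cache that grows with context.
        def generate_logits(step, tokens, state):
            logits = None
            for tok in tokens:                    # one pass per token: O(n) overall
                logits, state = step(tok, state)  # state size stays constant
            return logits, state
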
  • 2
    CTranslate2

    Fast inference engine for Transformer models

    ...The project implements a custom runtime that applies many performance optimization techniques, such as weight quantization, layer fusion, and batch reordering, to accelerate Transformer models and reduce their memory usage on CPU and GPU. Execution is significantly faster and uses fewer resources than general-purpose deep learning frameworks on the supported models and tasks, thanks to these and other advanced optimizations: padding removal, in-place operations, a caching mechanism, etc. Model serialization and computation support weights with reduced precision: 16-bit floating point (FP16), 16-bit integers (INT16), and 8-bit integers (INT8). A usage sketch follows this entry. ...
    Downloads: 2 This Week
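
    A minimal sketch of CTranslate2's Python API; the model directory is a placeholder for a converted model, and the tokens assume a SentencePiece vocabulary:

        import ctranslate2

        # Load a converted model on the CPU; "ende_ctranslate2/" is a placeholder.
        translator = ctranslate2.Translator("ende_ctranslate2/", device="cpu")
        # translate_batch expects pre-tokenized input.
        results = translator.translate_batch([["▁Hello", "▁world"]])
        print(results[0].hypotheses[0])
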
  • 3
    DALI

    A GPU-accelerated library containing highly optimized building blocks

    ...Deep learning applications require complex, multi-stage data processing pipelines that include loading, decoding, cropping, resizing, and many other augmentations. These data processing pipelines, which are currently executed on the CPU, have become a bottleneck, limiting the performance and scalability of training and inference. DALI addresses the CPU bottleneck by offloading data preprocessing to the GPU. Additionally, DALI relies on its own execution engine, built to maximize the throughput of the input pipeline (a pipeline sketch follows this entry).
    Downloads: 3 This Week
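
    A minimal sketch of a DALI pipeline; the "images/" directory is a placeholder for a dataset of JPEGs:

        from nvidia.dali import pipeline_def
        import nvidia.dali.fn as fn

        @pipeline_def(batch_size=32, num_threads=4, device_id=0)
        def image_pipeline():
            jpegs, labels = fn.readers.file(file_root="images/")
            images = fn.decoders.image(jpegs, device="mixed")  # decode on the GPU
            images = fn.resize(images, resize_x=224, resize_y=224)
            return images, labels

        pipe = image_pipeline()
        pipe.build()
        images, labels = pipe.run()
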
  • 4
    whisper.cpp

    Port of OpenAI's Whisper model in C/C++

    whisper.cpp is a lightweight C/C++ reimplementation of OpenAI's Whisper automatic speech recognition (ASR) model, designed for efficient, standalone transcription without external dependencies. The entire high-level implementation of the model is contained in whisper.h and whisper.cpp; the rest of the code is part of the ggml machine learning library. The repository's example command downloads the base.en model converted to the custom ggml format and runs inference on all .wav samples in the samples folder (a sketch of that invocation follows this entry)...
    Downloads: 550 This Week
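
    To keep this page's examples in one language, here is a hedged Python sketch that shells out to a locally built whisper.cpp binary; the binary name and paths assume the repository's classic layout (newer builds name the CLI whisper-cli):

        import subprocess

        # Transcribe a bundled sample with a locally built whisper.cpp binary.
        subprocess.run(
            ["./main", "-m", "models/ggml-base.en.bin", "-f", "samples/jfk.wav"],
            check=True,
        )
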
  • 5
    ChatLLM.cpp

    Pure C++ implementation of several models for real-time chatting

    chatllm.cpp is a pure C++ implementation designed for real-time chatting with Large Language Models (LLMs) on personal computers, supporting both CPU and GPU execution. It enables users to run various LLMs ranging from under 1 billion to over 300 billion parameters, delivering responsive and efficient conversational AI without relying on external servers.
    Downloads: 3 This Week
  • 6
    Simd Library

    C++ image processing and machine learning library using SIMD

    ...It provides many useful high-performance algorithms for image processing, such as pixel format conversion, image scaling and filtering, extraction of statistical information from images, motion detection, and object detection and classification, as well as neural networks. The algorithms are optimized using different SIMD CPU extensions; in particular, the library supports SSE, AVX, AVX-512, and AMX for x86/x64, and NEON for ARM. The Simd Library has a C API and also contains useful C++ classes and functions that facilitate access to it. The library supports dynamic and static linking; 32-bit and 64-bit Windows and Linux; the MSVS, G++, and Clang compilers; MSVS projects; and CMake build systems.
    Downloads: 4 This Week
  • 7
    PyTorch/XLA

    Enabling PyTorch on Google TPU

    PyTorch/XLA is a Python package that uses the XLA deep learning compiler to connect the PyTorch deep learning framework and Cloud TPUs. You can try it right now, for free, on a single Cloud TPU with Google Colab, and use it in production and on Cloud TPU Pods with Google Cloud. Take a look at one of our Colab notebooks to quickly try different PyTorch networks running on Cloud TPUs and learn how to use Cloud TPUs as PyTorch devices (a device sketch follows this entry). We are also introducing new TPU VMs for more transparent...
    Downloads: 0 This Week
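
    A minimal sketch of using a Cloud TPU core as a PyTorch device through torch_xla:

        import torch
        import torch_xla.core.xla_model as xm

        device = xm.xla_device()              # an XLA device (a TPU core when available)
        x = torch.randn(2, 2, device=device)
        y = x @ x
        xm.mark_step()                        # materialize the lazily traced XLA graph
        print(y)
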
  • 8
    MNN

    MNN is a blazing fast, lightweight deep learning framework

    ...On the Android platform, the core .so is about 400 KB, the OpenCL .so about 400 KB, and the Vulkan .so about 400 KB. Supports hybrid computing on multiple devices, currently CPU and GPU (a session sketch follows this entry).
    Downloads: 13 This Week
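
    A hedged sketch of MNN's session-based Python API; "model.mnn" is a placeholder for a converted model:

        import MNN

        interpreter = MNN.Interpreter("model.mnn")
        session = interpreter.createSession()
        input_tensor = interpreter.getSessionInput(session)
        # ... copy input data into input_tensor here ...
        interpreter.runSession(session)
        output = interpreter.getSessionOutput(session)
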
  • 9
    OpenCV

    Open Source Computer Vision Library

    ...It enables developers to build real-time vision applications ranging from facial recognition to object tracking. OpenCV supports a wide range of programming languages including C++, Python, and Java, and is optimized for both CPU and GPU operations (a short example follows this entry).
    Downloads: 18 This Week
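
    A short example of OpenCV's Python API; "input.jpg" is a placeholder path:

        import cv2

        img = cv2.imread("input.jpg")                 # BGR image as a NumPy array
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert to grayscale
        edges = cv2.Canny(gray, 100, 200)             # Canny edge detection
        cv2.imwrite("edges.png", edges)
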
  • 10
    MegEngine

    Easy-to-use deep learning framework with 3 key features

    ...Gain the lowest memory usage when running inference on a model by leveraging our unique pushdown memory planner. NOTE: MegEngine now supports Python installation on Linux-64bit/Windows-64bit/MacOS(CPU-Only)-10.14+/Android 7+(CPU-Only) platforms with Python 3.5 to 3.8. On Windows 10 you can either install the Linux distribution through the Windows Subsystem for Linux (WSL) or install the Windows distribution directly. Many other platforms are supported for inference (a tensor example follows this entry).
    Downloads: 3 This Week
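
    A minimal tensor example with MegEngine's Python API, running on the default (CPU) device:

        import megengine as mge
        import megengine.functional as F

        x = mge.tensor([[1.0, 2.0], [3.0, 4.0]])
        y = F.matmul(x, x)     # ordinary matrix multiply on the active device
        print(y.numpy())
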
  • 11
    ChatGLM.cpp

    C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)

    ChatGLM.cpp is a C++ implementation of the ChatGLM-6B family of models, enabling efficient local inference without requiring a Python environment. It is optimized for running on consumer hardware (a sketch using its optional Python binding follows this entry).
    Downloads: 2 This Week
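
    A hedged sketch using the project's Python binding (chatglm_cpp); the converted ggml model path is a placeholder:

        import chatglm_cpp

        pipeline = chatglm_cpp.Pipeline("./models/chatglm-ggml.bin")  # placeholder
        messages = [chatglm_cpp.ChatMessage(role="user", content="Hello!")]
        print(pipeline.chat(messages).content)
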
  • 12
    DeepDetect

    Deep learning API and server in C++14 with support for Caffe and PyTorch

    ...Ready for applications such as image tagging, object detection, segmentation, OCR, audio, video, and text classification, and CSV for tabular data and time series. Neural network templates for the most effective architectures for GPU, CPU, and embedded devices. Training in a few hours and with small data thanks to 25+ pre-trained models. Fully open source, with an ecosystem of tools (API clients, video, annotation, ...). Fast server written in pure C++, with a single codebase for cloud, desktop, and embedded (a REST example follows this entry).
    Downloads: 0 This Week
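
    A hedged sketch of calling a running DeepDetect server over REST; the URL, the "imageserv" service name, and the image URL are placeholders for an existing setup:

        import requests

        r = requests.post(
            "http://localhost:8080/predict",
            json={"service": "imageserv", "parameters": {},
                  "data": ["https://example.com/cat.jpg"]},
        )
        print(r.json())
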
  • 13
    TensorRT

    C++ library for high performance inference on NVIDIA GPUs

    ...It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications. TensorRT-based applications perform up to 40X faster than CPU-only platforms during inference. With TensorRT, you can optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyperscale data centers, embedded platforms, or automotive products. TensorRT is built on CUDA®, NVIDIA’s parallel programming model, and enables you to optimize inference using libraries, development tools, and technologies in CUDA-X™ for artificial intelligence, autonomous machines, high-performance computing, and graphics (a builder sketch follows this entry). ...
    Downloads: 22 This Week
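
    A minimal sketch of TensorRT's Python builder API, parsing an ONNX model ("model.onnx" is a placeholder); the EXPLICIT_BATCH flag is needed on TensorRT 8.x and is the default behavior in newer releases:

        import tensorrt as trt

        logger = trt.Logger(trt.Logger.WARNING)
        builder = trt.Builder(logger)
        network = builder.create_network(
            1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
        parser = trt.OnnxParser(network, logger)
        with open("model.onnx", "rb") as f:
            parser.parse(f.read())              # populate the network from ONNX
        config = builder.create_builder_config()
        engine = builder.build_serialized_network(network, config)
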
  • 14
    AlphaFold 3

    AlphaFold 3 inference pipeline

    AlphaFold 3, developed by Google DeepMind, is an advanced deep learning system for predicting biomolecular structures and interactions with exceptional accuracy. This repository provides the complete inference pipeline for running AlphaFold 3, though access to the model parameters is restricted and must be obtained directly from Google under specific terms of use. The system is designed for scientific research applications in structural biology, biochemistry, and bioinformatics, enabling...
    Downloads: 4 This Week
  • 15
    ncnn

    High-performance neural network inference framework for mobile

    ...It brings artificial intelligence right to your fingertips with no third-party dependencies, and runs faster than all other known open-source frameworks on mobile-phone CPUs. ncnn lets developers easily deploy deep learning models to mobile platforms and create intelligent apps. It is cross-platform and supports most commonly used CNN networks, including classical CNNs (VGG, AlexNet, GoogLeNet, Inception), face detection (MTCNN, RetinaFace), segmentation (FCN, PSPNet, UNet, YOLACT), and more. ncnn is currently used in a number of Tencent applications, namely QQ, Qzone, WeChat, and Pitu (a binding example follows this entry).
    Downloads: 19 This Week
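
    A hedged sketch using ncnn's Python binding (pip install ncnn); the model files and the "in0"/"out0" blob names are placeholders for a converted model:

        import ncnn
        import numpy as np

        net = ncnn.Net()
        net.load_param("model.ncnn.param")
        net.load_model("model.ncnn.bin")
        ex = net.create_extractor()
        ex.input("in0", ncnn.Mat(np.random.rand(3, 224, 224).astype(np.float32)))
        ret, out = ex.extract("out0")        # returns a status code and the output
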
  • 16
    OpenVINO

    OpenVINO™ Toolkit repository

    ...Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud. This open-source version includes several components: the Model Optimizer, the OpenVINO™ Runtime, and the Post-Training Optimization Tool, as well as CPU, GPU, MYRIAD, multi-device, and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open-source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi (a runtime example follows this entry).
    Downloads: 17 This Week
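
    A minimal example of the OpenVINO Runtime's Python API with the CPU plugin; "model.xml" is a placeholder for a converted IR model:

        from openvino.runtime import Core
        import numpy as np

        core = Core()
        model = core.read_model("model.xml")
        compiled = core.compile_model(model, "CPU")   # select the CPU plugin
        result = compiled(np.zeros((1, 3, 224, 224), dtype=np.float32))
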
  • 17
    gemma.cpp

    Lightweight, standalone C++ inference engine for Google's Gemma models

    Gemma.cpp is a C++ implementation for running inference with Gemma models efficiently on CPUs and GPUs. Developed by Google, it allows running large language models (LLMs) like Gemma with minimal hardware, focusing on optimized performance and low latency. Gemma.cpp is intended for developers seeking to deploy LLMs in production environments without needing massive computational resources.
    Downloads: 0 This Week
  • 18
    cuML

    RAPIDS Machine Learning Library

    cuML is a suite of libraries that implement machine learning algorithms and mathematical primitive functions sharing compatible APIs with other RAPIDS projects. cuML enables data scientists, researchers, and software engineers to run traditional tabular ML tasks on GPUs without going into the details of CUDA programming. In most cases, cuML's Python API matches the API of scikit-learn (an example follows this entry). For large datasets, these GPU-based implementations can complete 10-50x faster than their CPU equivalents. For details on performance, see the cuML Benchmarks Notebook.
    Downloads: 0 This Week
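
    An example of the scikit-learn-style API, executed on the GPU:

        from cuml.cluster import KMeans
        import cupy as cp

        X = cp.random.rand(10_000, 16, dtype=cp.float32)  # data stays on the GPU
        labels = KMeans(n_clusters=8).fit_predict(X)
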
  • 19
    Compute Library

    A set of computer vision and machine learning functions for Arm CPUs and GPUs

    The Compute Library is a set of computer vision and machine learning functions optimized for both Arm CPUs and GPUs using SIMD technologies. The library provides superior performance to other open-source alternatives and immediate support for new Arm® technologies, e.g. SVE2.
    Downloads: 0 This Week
  • 20
    pytorch-cpp

    C++ Implementation of PyTorch Tutorials for Everyone

    ...Requires an installed Python 3 with PyTorch and torchvision. You can choose to build only the tutorials in one of the categories: basics, intermediate, advanced, or popular. You can build and run the tutorials (on CPU) in a Docker container using the provided Dockerfile and docker-compose.yml files.
    Downloads: 0 This Week
  • 21
    ArrayFire

    ArrayFire, a general purpose GPU library

    ...Together we can fulfill the ArrayFire mission under an excellent code of conduct that promotes a respectful and friendly building experience. Rigorous benchmarks and tests ensure top performance and numerical accuracy. Cross-platform compatibility with support for CUDA, OpenCL, and native CPU on Windows, Mac, and Linux. Built-in visualization functions through Forge (a short example follows this entry).
    Downloads: 0 This Week
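
    A short example using ArrayFire's Python wrapper; the backend (CUDA, OpenCL, or native CPU) is selected at runtime:

        import arrayfire as af

        a = af.randu(3, 3)                   # random matrix on the active device
        b = af.matmul(a, af.transpose(a))
        af.display(b)
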
  • 22
    KoboldCpp

    Run GGUF models easily with a UI or API. One File. Zero Install.

    KoboldCpp is an easy-to-use AI text-generation application for GGML and GGUF models, inspired by the original KoboldAI. It is a single self-contained distributable that builds on llama.cpp and adds many additional powerful features (an API example follows this entry).
    Downloads: 426 This Week
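
    A hedged example of calling a locally running KoboldCpp instance through its KoboldAI-compatible HTTP API; the port and JSON fields are assumptions based on that API:

        import requests

        r = requests.post(
            "http://localhost:5001/api/v1/generate",
            json={"prompt": "Once upon a time", "max_length": 80},
        )
        print(r.json()["results"][0]["text"])
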
  • 23
    Bandicoot

    Fast C++ library for GPU linear algebra & scientific computing

    * Fast GPU linear algebra library (matrix maths) for the C++ language, aiming for a good balance between speed and ease of use
    * Provides high-level syntax and functionality deliberately similar to Matlab
    * Provides an API that aims to be compatible with Armadillo, for easy transition between CPU and GPU linear algebra code
    * Useful for algorithm development directly in C++, or quick conversion of research code into production environments
    * Distributed under the permissive Apache 2.0 license, useful for both open-source and proprietary (closed-source) software
    * Can be used for machine learning, pattern recognition, computer vision, signal processing, bioinformatics, statistics, finance, etc.
    * Downloads: http://coot.sourceforge.io/download.html
    * Documentation: http://coot.sourceforge.io/docs.html
    * Bug reports: http://coot.sourceforge.io/faq.html
    * Git repo: https://gitlab.com/conradsnicta/bandicoot-code
    Downloads: 6 This Week
  • 24
    Flashlight library

    A C++ standalone library for machine learning

    ...Flashlight can be broken down into several components, and each component can be built incrementally by specifying the correct build options. Flashlight is most easily built and installed with vcpkg; both the CUDA and CPU backends are supported. For either backend, first install Intel MKL. Flashlight app binaries are also built for the selected features and are installed into the vcpkg install tree's tools directory.
    Downloads: 0 This Week
  • 25
    MACE

    Deep learning inference framework optimized for mobile platforms

    Mobile AI Compute Engine (MACE for short) is a deep learning inference framework optimized for mobile heterogeneous computing on Android, iOS, Linux, and Windows devices. The runtime is optimized with NEON, OpenCL, and Hexagon, and the Winograd algorithm is used to speed up convolution operations. Initialization is also optimized to be faster. Chip-dependent power options such as big.LITTLE scheduling and Adreno GPU hints are included as advanced APIs. UI responsiveness guarantee is sometimes...
    Downloads: 0 This Week