Showing 16 open source projects for "fast performance"

  • 1
    CatBoost

    High-performance library for gradient boosting on decision trees

    CatBoost is a fast, high-performance open source library for gradient boosting on decision trees. Gradient boosting is a machine learning method with many applications, including ranking, classification, and regression, and CatBoost provides APIs for Python, R, Java, and C++. CatBoost offers superior performance over other GBDT libraries on many datasets, along with standout features such as native handling of categorical features.
    Downloads: 8 This Week
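
    To make the API concrete, here is a minimal sketch of training a CatBoost classifier in Python; the toy data and hyperparameter values are illustrative, not taken from this listing.

        from catboost import CatBoostClassifier

        # Toy dataset: two numeric features plus one categorical feature (index 2).
        X = [[1.0, 0.2, "red"], [0.5, 1.1, "blue"],
             [0.3, 0.9, "red"], [1.2, 0.1, "blue"]]
        y = [1, 0, 0, 1]

        # CatBoost handles categorical features natively via cat_features,
        # so no manual one-hot encoding is required.
        model = CatBoostClassifier(iterations=100, learning_rate=0.1,
                                   depth=4, verbose=False)
        model.fit(X, y, cat_features=[2])

        print(model.predict([[0.8, 0.3, "red"]]))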
  • 2
    BitNet

    Inference framework for 1-bit LLMs

    BitNet (bitnet.cpp) is a high-performance inference framework designed to optimize the execution of 1-bit large language models, making them more efficient for edge devices and local deployment. The framework offers significant speedups and energy reductions, achieving up to 6.17x faster inference on x86 CPUs with energy savings of around 70%, and can run models such as the 100B BitNet b1.58 with impressive efficiency. With support for lossless inference and enhanced processing power,...
    Downloads: 5 This Week
  • 3
    llama.cpp

    Port of Facebook's LLaMA model in C/C++

    The llama.cpp project enables the inference of Meta's LLaMA model (and other models) in pure C/C++ without requiring a Python runtime. It is designed for efficient and fast model execution, offering easy integration for applications needing LLM-based capabilities. The repository focuses on providing a highly optimized and portable implementation for running large language models directly within C/C++ environments.
    Downloads: 122 This Week
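
    llama.cpp itself is driven from C/C++ or its bundled CLI tools; as a quick sketch, the community llama-cpp-python binding (a separate project that wraps llama.cpp) exposes the same engine from Python. The model path below is a placeholder.

        from llama_cpp import Llama  # community binding over llama.cpp

        # Load a GGUF-format model from disk (path is a placeholder).
        llm = Llama(model_path="./models/llama-7b.Q4_K_M.gguf", n_ctx=2048)

        # One completion call; tokenization and sampling happen inside llama.cpp.
        out = llm("Q: What is approximate nearest neighbor search? A:",
                  max_tokens=64, stop=["Q:"])
        print(out["choices"][0]["text"])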
  • 4
    gemma.cpp

    Lightweight, standalone C++ inference engine for Google's Gemma models

    Gemma.cpp is a C++ implementation for running inference with Gemma models efficiently on CPUs and GPUs. Developed by Google, it allows running large language models (LLMs) like Gemma with minimal hardware, focusing on optimized performance and low latency. Gemma.cpp is intended for developers seeking to deploy LLMs in production environments without needing massive computational resources.
    Downloads: 0 This Week
  • 5
    CTranslate2

    Fast inference engine for Transformer models

    CTranslate2 is a C++ and Python library for efficient inference with Transformer models. The project implements a custom runtime that applies many performance optimization techniques, such as weight quantization, layer fusion, and batch reordering, to accelerate Transformer models and reduce their memory usage on CPU and GPU. The execution is significantly faster and requires fewer resources than general-purpose deep learning frameworks on supported models and tasks, thanks to many...
    Downloads: 2 This Week
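
    As a sketch of the Python side: CTranslate2 loads a model directory produced by its converter tools and translates pre-tokenized batches. The model directory and tokens below are placeholders.

        import ctranslate2

        # Load a converted model directory (output of a ct2-* converter).
        translator = ctranslate2.Translator("ende_ctranslate2/", device="cpu")

        # translate_batch expects pre-tokenized input, e.g. SentencePiece pieces.
        results = translator.translate_batch([["▁Hello", "▁world", "!"]])

        # Each hypothesis is a list of target-side tokens.
        print(results[0].hypotheses[0])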
  • 6
    Vespa

    The open big data serving engine

    Make AI-driven decisions using your data, in real time and at any scale, with unbeatable performance. Vespa is a full-featured text search engine that supports both regular text search and fast approximate vector search (ANN). This makes it easy to create high-performing search applications at any scale, whether you want to use traditional techniques or a modern vector-based approach. You can even combine both approaches efficiently in the same query, something no other engine can do. ...
    Downloads: 1 This Week
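
    A minimal query sketch using pyvespa, the Python client for Vespa (a separate package, not the engine itself); the endpoint and query values are placeholders for a running Vespa application.

        from vespa.application import Vespa

        # Connect to a deployed Vespa endpoint (URL and port are placeholders).
        app = Vespa(url="http://localhost", port=8080)

        # Plain text search expressed in YQL; hybrid text + ANN queries
        # go through the same interface.
        response = app.query(body={
            "yql": "select * from sources * where userQuery()",
            "query": "fast vector search",
            "hits": 5,
        })
        for hit in response.hits:
            print(hit["id"])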
  • 7
    Compute Library

    A set of computer vision and machine learning functions optimized for Arm CPUs and GPUs

    The Compute Library is a set of computer vision and machine learning functions optimized for both Arm CPUs and GPUs using SIMD technologies. The library provides superior performance to other open source alternatives and immediate support for new Arm® technologies, e.g. SVE2.
    Downloads: 2 This Week
  • 8
    ROOT

    Analyzing, storing and visualizing big data, scientifically

    ROOT is a unified software package for the storage, processing, and analysis of scientific data: from its acquisition to the final visualization in the form of highly customizable, publication-ready plots. It is reliable, performant and well supported, easy to use and obtain, and strives to maximize the quantity and impact of scientific results obtained per unit cost, both of human effort and computing resources. ROOT provides a very efficient storage system for data models, that...
    Downloads: 2 This Week
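
    ROOT is scriptable from Python through PyROOT, which ships with ROOT. A minimal sketch that fills a histogram with random data and saves a plot:

        import ROOT

        # 1D histogram: 100 bins over [0, 10); the title string also labels the axes.
        h = ROOT.TH1F("h", "Gaussian sample;x;entries", 100, 0.0, 10.0)

        # Fill with 10k draws from a Gaussian.
        rng = ROOT.TRandom3(42)
        for _ in range(10000):
            h.Fill(rng.Gaus(5.0, 1.0))

        # Render to a canvas and save the plot to disk.
        c = ROOT.TCanvas("c", "c", 800, 600)
        h.Draw()
        c.SaveAs("hist.png")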
  • 9
    MNN

    MNN is a blazing fast, lightweight deep learning framework

    MNN is a highly efficient and lightweight deep learning framework. It supports both inference and training of deep learning models and has industry-leading on-device performance for both. At present, MNN has been integrated into more than 20 apps of Alibaba Inc., such as Taobao, Tmall, Youku, DingTalk, and Xianyu, covering more than 70 usage scenarios such as live broadcast, short-video capture, search recommendation, searching for products by image, interactive marketing, equity...
    Downloads: 16 This Week
  • 10
    EnvPool

    C++-based high-performance parallel environment execution engine

    EnvPool is a fast, asynchronous, and parallel RL environment library designed for scaling reinforcement learning experiments. Developed by Sea AI Lab (SAIL) in Singapore, it pairs a C++ backend with a Python frontend for extremely high-speed environment interaction, supporting thousands of environments running in parallel on a single machine. It is compatible with the Gymnasium API and RLlib, making it suitable for scalable training pipelines.
    Downloads: 0 This Week
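
    A minimal sketch of the batched API, assuming a recent EnvPool with Gymnasium support and the Atari environments installed; one make() call creates many environments and each step() advances all of them.

        import envpool
        import numpy as np

        # Create 8 Pong environments behind a single Gymnasium-style handle.
        env = envpool.make("Pong-v5", env_type="gymnasium", num_envs=8)

        obs, info = env.reset()  # observations arrive batched across all 8 envs
        for _ in range(100):
            actions = np.random.randint(0, env.action_space.n, size=8)
            obs, rew, terminated, truncated, info = env.step(actions)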
  • 11
    mlpack

    A C++ machine learning library with emphasis on scalability, speed, and ease of use

    mlpack is a C++ machine learning library with an emphasis on scalability, speed, and ease of use. It aims to make machine learning accessible to novice users through a simple, consistent API, while exploiting C++ language features to provide maximum performance and flexibility for expert users. More info and downloads: https://mlpack.org. Git repo: https://github.com/mlpack/mlpack
    Downloads: 0 This Week
  • 12
    Coqui STT

    The deep learning toolkit for speech-to-text

    Coqui STT is a fast, open source, multi-platform deep learning toolkit for training and deploying speech-to-text models, battle-tested in both production and research. Its decoder can return multiple candidate transcripts, each with an associated confidence score.
    Downloads: 6 This Week
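
    A minimal transcription sketch with the stt Python package; the model and audio paths are placeholders, and the audio is assumed to be 16-bit, 16 kHz mono PCM.

        import wave
        import numpy as np
        from stt import Model  # pip install stt

        # Load an exported Coqui STT model (path is a placeholder).
        model = Model("model.tflite")

        # Read 16-bit 16 kHz mono PCM samples into an int16 buffer.
        with wave.open("audio.wav", "rb") as wav:
            audio = np.frombuffer(wav.readframes(wav.getnframes()),
                                  dtype=np.int16)

        print(model.stt(audio))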
  • 13
    Flashlight library

    A C++ standalone library for machine learning

    Flashlight is a fast, flexible machine learning library written entirely in C++ by Facebook AI Research and the creators of Torch, TensorFlow, Eigen, and Deep Speech. Native support in C++ and simple extensibility make Flashlight a powerful research framework that is hackable to its core, enabling fast iteration on new experimental setups and algorithms without sacrificing performance.
    Downloads: 0 This Week
  • 14
    Procgen

    Procedurally-Generated Game-Like Gym-Environments

    Procgen (short for Procedural Generation Benchmark) is a suite of 16 procedurally generated, game-like reinforcement learning environments designed to evaluate generalization and sample efficiency in RL agents. Unlike fixed, deterministic environments, Procgen generates new levels (layouts, obstacles, visual variation) each episode, making it impossible for an agent to simply memorize trajectories. The environments are designed to run very quickly (thousands of steps per second on a single...
    Downloads: 0 This Week
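
    Procgen environments register with the classic Gym API, so a sketch is an ordinary Gym loop; coinrun is one of the 16 games, and the keyword arguments shown are illustrative.

        import gym

        # num_levels=0 means unlimited procedurally generated levels,
        # so every reset produces a fresh layout.
        env = gym.make("procgen:procgen-coinrun-v0",
                       num_levels=0, distribution_mode="easy")

        obs = env.reset()
        for _ in range(1000):
            # Classic Gym step API: (obs, reward, done, info).
            obs, reward, done, info = env.step(env.action_space.sample())
            if done:
                obs = env.reset()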
  • 15
    TerarkDB

    A RocksDB-compatible KV storage engine with better performance

    TerarkDB is an open-source storage engine designed as a replacement for RocksDB — offering improved performance, better compression, and optimized tail-latency for key-value workloads. It aims to be drop-in compatible with existing RocksDB setups: you can migrate most RocksDB instances over to TerarkDB without rewriting your storage logic. Under the hood, TerarkDB employs advanced data structures and compression strategies to reduce I/O, memory usage, and latency variability — delivering higher throughput and more predictable performance under heavy load. ...
    Downloads: 0 This Week
  • 16
    TurboTransformers

    Fast and user-friendly runtime for transformer inference

    TurboTransformers is a high-performance inference framework optimized for running Transformer models efficiently on CPUs and GPUs. It improves latency and throughput for NLP applications.
    Downloads: 0 This Week
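
    A rough sketch of the intended workflow, following the convert-then-serve pattern in the project's examples; treat the exact class and method names (in particular from_torch) as assumptions rather than a verified API.

        import torch
        import transformers
        import turbo_transformers

        # Load a stock PyTorch BERT, then convert it for the optimized runtime
        # (from_torch follows the project's example code; assumption, not verified).
        torch_model = transformers.BertModel.from_pretrained("bert-base-uncased")
        torch_model.eval()
        tt_model = turbo_transformers.BertModel.from_torch(torch_model)

        # Same calling convention as the original model.
        input_ids = torch.tensor([[101, 7592, 2088, 102]])  # [CLS] hello world [SEP]
        with torch.no_grad():
            output = tt_model(input_ids)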