Showing 88 open source projects for "fast performance"

  • 1
    BrowserAI

    Run local LLMs such as Llama, DeepSeek, and Kokoro inside your browser

    BrowserAI is a cutting-edge platform that allows users to run large language models (LLMs) directly in their web browser without the need for a server. It leverages WebGPU for accelerated performance and supports offline functionality, making it a highly efficient and privacy-conscious solution. The platform provides a developer-friendly SDK with pre-configured popular models, and it allows for seamless switching between MLC and Transformer engines. Additionally, it supports features such as...
    Downloads: 0 This Week
  • 2
    CTranslate2

    Fast inference engine for Transformer models

    CTranslate2 is a C++ and Python library for efficient inference with Transformer models. The project implements a custom runtime that applies many performance optimization techniques, such as weight quantization, layer fusion, and batch reordering, to accelerate Transformer models and reduce their memory usage on CPU and GPU. Execution is significantly faster and requires fewer resources than general-purpose deep learning frameworks on supported models and tasks thanks to many...
    Downloads: 2 This Week
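    A minimal Python sketch of the kind of workflow CTranslate2 supports, assuming a translation model already converted to the CTranslate2 format; the "ende_ctranslate2/" directory and the example tokens are placeholders, not taken from the project's documentation:

        import ctranslate2

        # Load a converted model directory (placeholder path); int8 is one of the
        # compute types CTranslate2 can apply at load time.
        translator = ctranslate2.Translator("ende_ctranslate2/", device="cpu", compute_type="int8")

        # translate_batch expects pre-tokenized input; these SentencePiece-style
        # tokens are illustrative only.
        results = translator.translate_batch([["▁Hello", "▁world", "!"]])
        print(results[0].hypotheses[0])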
  • 3
    BioNeMo

    BioNeMo Framework: For building and adapting AI models

    BioNeMo is an AI-powered framework developed by NVIDIA for protein and molecular generation using deep learning models. It provides researchers and developers with tools to design, analyze, and optimize biological molecules, aiding in drug discovery and synthetic biology applications.
    Downloads: 0 This Week
  • 4
    SciSpaCy

    A full spaCy pipeline and models for scientific/biomedical documents

    ScispaCy is a spaCy extension optimized for processing biomedical and scientific text, providing domain-specific NLP models for tasks like named entity recognition (NER) and dependency parsing.
    Downloads: 0 This Week
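    A minimal sketch of loading one of the scispaCy pipelines for biomedical NER, assuming the en_core_sci_sm model package has been installed alongside scispacy:

        import spacy

        # en_core_sci_sm is one of the small scispaCy pipelines for scientific text.
        nlp = spacy.load("en_core_sci_sm")
        doc = nlp("Spinal and bulbar muscular atrophy (SBMA) is an inherited motor neuron disease.")

        # Entities recognized by the domain-specific NER component.
        for ent in doc.ents:
            print(ent.text, ent.label_)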
  • 5
    Lance

    Modern columnar data format for ML and LLMs implemented in Rust

    Lance is a columnar data format that is easy and fast to version, query and train on. It’s designed to be used with images, videos, 3D point clouds, audio and of course tabular data. It supports any POSIX file systems, and cloud storage like AWS S3 and Google Cloud Storage.
    Downloads: 1 This Week
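    A small sketch of writing and reading a Lance dataset from Python, assuming the pylance package (imported as lance) and PyArrow are installed; the local path is a placeholder and could equally be an S3 or GCS URI:

        import pyarrow as pa
        import lance

        # Write a small Arrow table to the Lance columnar format.
        table = pa.table({"id": [1, 2, 3], "caption": ["cat", "dog", "bird"]})
        lance.write_dataset(table, "example.lance", mode="overwrite")

        # Re-open the dataset and read it back as an Arrow table.
        ds = lance.dataset("example.lance")
        print(ds.to_table())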
  • 6
    Scrapling

    An undetectable, powerful, flexible, high-performance Python library

    Scrapling is a Python scraping framework built for the modern web, combining high-performance fetchers with a rapid parsing engine to handle dynamic sites and anti-bot countermeasures. It emphasizes being “undetectable,” flexible, and fast, offering an approachable API for both experienced scrapers and newcomers. The library targets the full scraping pipeline: session handling, fetching, rendering when needed, parsing, and export—while keeping ergonomics front and center. ...
    Downloads: 0 This Week
  • 7
    Compute Library

    The Compute Library is a set of computer vision and machine learning functions

    The Compute Library is a set of computer vision and machine learning functions optimized for both Arm CPUs and GPUs using SIMD technologies. The library provides superior performance to other open-source alternatives and immediate support for new Arm® technologies, e.g. SVE2.
    Downloads: 2 This Week
  • 8
    Vespa

    The open big data serving engine

    Vespa lets you make AI-driven decisions on your data in real time and at scale. It is a full-featured text search engine that supports both regular text search and fast approximate vector search (ANN), which makes it easy to build high-performing search applications of any size, whether you use traditional techniques or a modern vector-based approach. Both approaches can even be combined efficiently in the same query. ...
    Downloads: 1 This Week
  • 9
    Engram

    A New Axis of Sparsity for Large Language Models

    Engram is a high-performance embedding and similarity search library focused on making retrieval-augmented workflows efficient, scalable, and easy to adopt by developers building search, recommendation, or semantic matching systems. It provides utilities to generate embeddings from text or other structured data, index them using efficient approximate nearest neighbor algorithms, and perform real-time similarity queries even on large corpora. Engineered with speed and memory efficiency in...
    Downloads: 5 This Week
  • 10
    Tiktoken

    tiktoken is a fast BPE tokeniser for use with OpenAI's models

    tiktoken is a high-performance tokenizer library based on byte-pair encoding (BPE), designed for use with OpenAI’s models. It handles encoding and decoding text to token IDs efficiently, with minimal overhead. Because tokenization is a fundamental step in preparing text for models, tiktoken is optimized for speed, memory, and correctness in model contexts (e.g. matching OpenAI’s internal tokenization). The repo supports multiple encodings (e.g. “cl100k_base”) and lets users switch encoding...
    Downloads: 0 This Week
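    A minimal encode/decode round trip with tiktoken; "cl100k_base" is one of the encodings the library ships:

        import tiktoken

        # Load a BPE encoding by name and tokenize a short string.
        enc = tiktoken.get_encoding("cl100k_base")
        tokens = enc.encode("fast performance matters")

        print(tokens)              # token ids
        print(enc.decode(tokens))  # round-trips back to the original text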
  • 11
    Daft

    Distributed DataFrame for Python designed for the cloud

    ...Underneath its Python API, Daft is built in blazing fast Rust code. Rust powers Daft’s vectorized execution and async I/O, allowing Daft to outperform frameworks such as Spark.
    Downloads: 1 This Week
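    A small illustrative sketch of Daft's DataFrame API; real workloads would typically read Parquet or CSV from local or cloud storage rather than an in-memory dict:

        import daft

        # Build a tiny DataFrame in memory.
        df = daft.from_pydict({"text": ["fast", "performance", "rust"],
                               "score": [0.9, 0.4, 0.8]})

        # Operations are lazy; execution happens when results are materialized.
        result = df.where(daft.col("score") > 0.5).select(daft.col("text"))
        result.show()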
  • 12
    HDBSCAN

    A high performance implementation of HDBSCAN clustering

    HDBSCAN - Hierarchical Density-Based Spatial Clustering of Applications with Noise. Performs DBSCAN over varying epsilon values and integrates the result to find a clustering that gives the best stability over epsilon. This allows HDBSCAN to find clusters of varying densities (unlike DBSCAN), and be more robust to parameter selection. In practice this means that HDBSCAN returns a good clustering straight away with little or no parameter tuning -- and the primary parameter, minimum cluster...
    Downloads: 0 This Week
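    A minimal clustering sketch with the hdbscan package; min_cluster_size is the primary parameter the description refers to, and the synthetic data here is purely illustrative:

        import numpy as np
        import hdbscan

        # Two well-separated blobs plus some uniform noise.
        rng = np.random.default_rng(0)
        blobs = np.vstack([rng.normal(0, 0.3, size=(100, 2)),
                           rng.normal(5, 0.3, size=(100, 2))])
        noise = rng.uniform(-2, 7, size=(20, 2))
        X = np.vstack([blobs, noise])

        clusterer = hdbscan.HDBSCAN(min_cluster_size=15)
        labels = clusterer.fit_predict(X)
        print(np.unique(labels))  # noise points are labeled -1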
  • 13
    BentoML

    Unified Model Serving Framework

    BentoML simplifies ML model deployment and serves your models at production scale. It natively supports multiple ML frameworks: TensorFlow, PyTorch, XGBoost, Scikit-Learn, and many more. Define custom serving pipelines with pre-processing, post-processing, and ensemble models. The standard .bento format packages code, models, and dependencies for easy versioning and deployment. Integrate with any training pipeline or ML experimentation platform. Parallelize compute-intensive model inference...
    Downloads: 3 This Week
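    A minimal service sketch in the decorator style used by recent BentoML releases (older 1.x versions used bentoml.Service objects and runners instead, so adapt to your version); the service name and logic are placeholders:

        import bentoml

        @bentoml.service
        class TextLength:
            # Stand-in for real model inference; a production service would load a
            # model in __init__ and call it here.
            @bentoml.api
            def length(self, text: str) -> int:
                return len(text)

        # Serve locally with:  bentoml serve <module_name>:TextLength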
  • 14
    ROOT

    Analyzing, storing and visualizing big data, scientifically

    ROOT is a unified software package for the storage, processing, and analysis of scientific data: from its acquisition to the final visualization in the form of highly customizable, publication-ready plots. It is reliable, performant and well supported, easy to use and obtain, and strives to maximize the quantity and impact of scientific results obtained per unit cost, both of human effort and computing resources. ROOT provides a very efficient storage system for data models, that...
    Downloads: 2 This Week
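    A small PyROOT sketch of the typical fill-and-plot workflow; the histogram name, binning, and output file are arbitrary:

        import ROOT

        # Book a 1D histogram and fill it with Gaussian random numbers.
        h = ROOT.TH1F("h", "Gaussian sample;x;entries", 100, -4, 4)
        rng = ROOT.TRandom3(42)
        for _ in range(10000):
            h.Fill(rng.Gaus(0, 1))

        # Draw the histogram and save it as an image.
        c = ROOT.TCanvas("c", "c", 800, 600)
        h.Draw()
        c.SaveAs("gaussian.png")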
  • 15
    MiMo-V2-Flash

    MiMo-V2-Flash: Efficient Reasoning, Coding, and Agentic Foundation

    MiMo-V2-Flash is a large Mixture-of-Experts language model designed to deliver strong reasoning, coding, and agentic-task performance while keeping inference fast and cost-efficient. It uses an MoE setup where a very large total parameter count is available, but only a smaller subset is activated per token, which helps balance capability with runtime efficiency. The project positions the model for workflows that require tool use, multi-step planning, and higher throughput, rather than only single-turn chat. ...
    Downloads: 3 This Week
  • 16
    DSPy

    DSPy: The framework for programming—not prompting—language models

    Developed by the Stanford NLP Group, DSPy (Declarative Self-improving Python) is a framework that enables developers to program language models through compositional Python code rather than relying solely on prompt engineering. It facilitates the construction of modular AI systems and provides algorithms for optimizing prompts and weights, enhancing the quality and reliability of language model outputs.
    Downloads: 0 This Week
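    A minimal sketch of DSPy's signature-based programming model; the model identifier is a placeholder and provider credentials are assumed to come from the environment:

        import dspy

        # Configure a language model backend (placeholder model name).
        dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

        # A module declared from a signature instead of a hand-written prompt.
        qa = dspy.ChainOfThought("question -> answer")
        print(qa(question="What does byte-pair encoding do?").answer)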
  • 17
    MNN

    MNN is a blazing fast, lightweight deep learning framework

    MNN is a highly efficient and lightweight deep learning framework. It supports inference and training of deep learning models and has industry-leading performance for on-device inference and training. At present, MNN has been integrated into more than 20 apps of Alibaba Inc., such as Taobao, Tmall, Youku, DingTalk, and Xianyu, covering more than 70 usage scenarios such as live broadcast, short video capture, search recommendation, product search by image, interactive marketing, equity...
    Downloads: 16 This Week
  • 18
    TurboDiffusion

    100–200× Acceleration for Video Diffusion Models

    ...The project targets large video models and enables developers to run accelerated generation even on single high-end GPUs, making fast video synthesis more practical for research and creative workflows. TurboDiffusion is structured to integrate with existing diffusion model architectures and provides tools for experimenting with and benchmarking speed and quality trade-offs.
    Downloads: 0 This Week
  • 19
    Foolbox

    Python toolbox to create adversarial examples

    Foolbox: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX. Foolbox 3 is built on top of EagerPy and runs natively in PyTorch, TensorFlow, and JAX. Foolbox provides a large collection of state-of-the-art gradient-based and decision-based adversarial attacks. Catch bugs before running your code thanks to extensive type annotations in Foolbox. Foolbox is a Python library that lets you easily run adversarial attacks against machine...
    Downloads: 0 This Week
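    A sketch of the Foolbox 3 attack loop against a pretrained torchvision classifier, close to the pattern in the project's README; the model choice and epsilon value are arbitrary:

        import foolbox as fb
        import torchvision.models as models

        # Wrap a pretrained ImageNet classifier with standard normalization constants.
        model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
        preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
        fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

        # Sample images bundled with Foolbox, then run an L-infinity PGD attack.
        images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=8)
        attack = fb.attacks.LinfPGD()
        raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)
        print(is_adv.float().mean().item())  # fraction of successful attacks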
  • 20
    Audiblez

    Generate audiobooks from e-books

    Audiblez is a tool for generating high-quality .m4b audiobooks directly from .epub e-books using the Kokoro-82M neural text-to-speech model. It focuses on making audiobook creation easy and fast: from a single command, the tool splits an e-book into chapters, synthesizes audio for each section, and then merges the results into a structured audiobook with chapter-based WAV files and a final .m4b container. The Kokoro-82M model it uses is compact (82M parameters) yet natural sounding, trained...
    Downloads: 4 This Week
  • 21
    NeuralForecast

    Scalable and user-friendly neural forecasting algorithms

    NeuralForecast offers a large collection of neural forecasting models focused on performance, usability, and robustness. The models range from classic networks like RNNs to the latest transformers: MLP, LSTM, GRU, RNN, TCN, TimesNet, BiTCN, DeepAR, NBEATS, NBEATSx, NHITS, TiDE, DeepNPTS, TSMixer, TSMixerx, MLPMultivariate, DLinear, NLinear, TFT, Informer, AutoFormer, FedFormer, PatchTST, iTransformer, StemGNN, and TimeLLM. There is a shared belief in neural forecasting methods'...
    Downloads: 1 This Week
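    A minimal forecasting sketch with one of the listed models (NHITS) on the AirPassengers sample that ships with the library; horizon, input size, and training steps are arbitrary:

        from neuralforecast import NeuralForecast
        from neuralforecast.models import NHITS
        from neuralforecast.utils import AirPassengersDF

        # AirPassengersDF has the expected columns: unique_id, ds (timestamp), y (target).
        nf = NeuralForecast(models=[NHITS(input_size=24, h=12, max_steps=100)], freq="M")
        nf.fit(df=AirPassengersDF)

        forecasts = nf.predict()
        print(forecasts.head())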
  • 22
    Solon

    Java enterprise application development framework

    Solon is a full-scenario Java enterprise application framework that positions itself as a lean, high-performance alternative to heavy stacks. It advertises large concurrency gains, lower memory use, much faster startup, and dramatically smaller packages while remaining compatible from Java 8 through Java 24. The framework focuses on restrained APIs and an open ecosystem, with modules that cover web, data, cloud, and microservice patterns. Its messaging emphasizes “replaceable Spring”...
    Downloads: 0 This Week
  • 23
    UpTrain

    Your open-source LLM evaluation toolkit

    Get scores for factual accuracy, context retrieval quality, guideline adherence, tonality, and many more. You can’t improve what you can’t measure. UpTrain continuously monitors your application's performance on multiple evaluation criteria and alerts you to any regressions with automatic root cause analysis. UpTrain enables fast and robust experimentation across multiple prompts, model providers, and custom configurations by calculating quantitative scores for direct comparison and optimal prompt selection. Hallucinations have plagued LLMs since their inception. ...
    Downloads: 0 This Week
  • 24
    TensorFlow Model Garden

    Models and examples built with TensorFlow

    The TensorFlow Model Garden is a repository with a number of different implementations of state-of-the-art (SOTA) models and modeling solutions for TensorFlow users. We aim to demonstrate the best practices for modeling so that TensorFlow users can take full advantage of TensorFlow for their research and product development. To improve the transparency and reproducibility of our models, training logs on TensorBoard.dev are also provided for models to the extent possible though not all models...
    Downloads: 0 This Week
  • 25
    Smile

    Statistical machine intelligence and learning engine

    Smile is a fast and comprehensive machine learning engine. With advanced data structures and algorithms, Smile delivers state-of-the-art performance. In a third-party benchmark, Smile significantly outperforms R, Python, Spark, H2O, and XGBoost, and is a couple of times faster than the closest competitor. Memory usage is also very efficient.
    Downloads: 0 This Week