Showing 131 open source projects for "latency"

  • 1
FastRAG

    Efficient Retrieval Augmentation and Generation Framework

fastRAG is a research framework for building efficient, optimized retrieval-augmented generative pipelines, incorporating state-of-the-art LLMs and information retrieval. It is designed to give researchers and developers a comprehensive toolset for advancing retrieval-augmented generation.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 2
LinuxPlay

    An open-source, ultra-low-latency remote desktop for Linux hosts

LinuxPlay is an open-source remote desktop project that provides ultra-low-latency access to Linux hosts. It builds on FFmpeg and native Linux APIs for efficient encoding, decoding, and rendering, and handles audio-video synchronization to keep remote sessions responsive. ...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 3
Nixtla TimeGPT

TimeGPT-1: production-ready, pre-trained time series foundation model

TimeGPT is a production-ready generative pretrained transformer for time series. It can accurately forecast series from domains such as retail, electricity, finance, and IoT with just a few lines of code. Whether you're a bank forecasting market trends or a startup predicting product demand, TimeGPT democratizes access to cutting-edge predictive insights without requiring a dedicated team of machine learning engineers. As a generative model for time series, TimeGPT is capable of...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 4
Lemonade

    Lemonade helps users run local LLMs with the highest performance

    ...The repository highlights easy onboarding with downloads, docs, and a Discord for support, suggesting an active user community. Messaging centers on squeezing maximum throughput/latency from modern accelerators without users having to hand-tune kernels or flags. Releases further reinforce the “server” framing, pointing developers toward a service that can be integrated into apps and tools.
    Downloads: 8 This Week
    Last Update:
    See Project
  • 5
Magika

Fast and accurate AI-powered file content type detection

    Magika is an AI-powered file-type detector that uses a compact deep-learning model to classify binary and textual files with high accuracy and very low latency. The model is engineered to be only a few megabytes and to run quickly even on CPU-only systems, making it practical for desktop apps, servers, and security pipelines. Magika ships as a command-line tool and a library, providing drop-in detection that improves on traditional “magic number” and heuristic approaches, especially for ambiguous or short files. ...
    Downloads: 5 This Week
    Last Update:
    See Project
  • 6
Qwen2.5-Omni

    Capable of understanding text, audio, vision, video

...Very strong benchmark performance across modalities (audio understanding, speech recognition, image/video reasoning), often outperforming or matching single-modality models at a similar scale. Real-time streaming responses, including natural speech synthesis (text-to-speech) and chunked inputs for low-latency interaction.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
ty

    An extremely fast Python type checker and language server

    ...It is positioned as a next-generation alternative to tools such as mypy and Pyright, offering significantly faster performance through incremental analysis and optimized execution. The tool is designed from the ground up to power editor integrations, enabling real-time feedback as developers write code with minimal latency. ty includes advanced type system capabilities such as intersection types, improved type inference, and detailed diagnostics that help identify issues even in partially typed codebases. It supports integration with multiple development environments through its language server implementation, providing features like code navigation, auto-completion, and inline hints.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 8
PostHog

    PostHog provides open-source web & product analytics

    ...Run custom filters and transformations on your incoming data. Send it to 25+ tools or any webhook in real time or batch export large amounts to your warehouse. Capture traces, generations, latency, and cost for your LLM-powered app.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 9
Quantitative Trading System

    A comprehensive quantitative trading system with AI-powered analysis

    Quantitative Trading System is a comprehensive quantitative trading platform that integrates artificial intelligence, financial data analysis, and automated strategy execution within a unified software system. The project is designed to provide an end-to-end infrastructure for building and operating algorithmic trading strategies in financial markets. It includes tools for collecting and processing market data from multiple sources, performing statistical and machine learning analysis, and...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 10
LiveAvatar

    Streaming Real-time Audio-Driven Avatar Generation

    LiveAvatar is an open-source research and implementation project that provides a unified framework for real-time, streaming, interactive avatar video generation driven by audio and other control signals. It implements techniques from state-of-the-art diffusion-based avatar modeling to support infinite-length continuous video generation with low latency, enabling interactive AI avatars that maintain continuity and realism over extended sessions. The project co-designs algorithms and system optimizations, such as block-wise autoregressive processing and fast sampling strategies, to deliver real-time frame rates (e.g., ~45 FPS on appropriate GPU clusters) while handling non-stop generation without quality degradation. ...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 11
gpt-oss

    gpt-oss-120b and gpt-oss-20b are two open-weight language models

    ...The series includes two main models: gpt-oss-120b, a 117-billion parameter model optimized for general-purpose, high-reasoning tasks that can run on a single H100 GPU, and gpt-oss-20b, a lighter 21-billion parameter model ideal for low-latency or specialized applications on smaller hardware. Both models use a native MXFP4 quantization for efficient memory use and support OpenAI’s Harmony response format, enabling transparent full chain-of-thought reasoning and advanced tool integrations such as function calling, browsing, and Python code execution. The repository provides multiple reference implementations—including PyTorch, Triton, and Metal—for educational and experimental use, as well as example clients and tools like a terminal chat app and a Responses API server.
    Downloads: 12 This Week
    Last Update:
    See Project
  • 12
Guidance

    A guidance language for controlling large language models

    Guidance is an efficient programming paradigm for steering language models. With Guidance, you can control how output is structured and get high-quality output for your use case—while reducing latency and cost vs. conventional prompting or fine-tuning. It allows users to constrain generation (e.g. with regex and CFGs) as well as to interleave control (conditionals, loops, tool use) and generation seamlessly.
    Downloads: 0 This Week
    Last Update:
    See Project
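The constrained-generation idea behind Guidance can be sketched in plain Python. This is a toy illustration of the concept (filtering a model's candidate continuations against a pattern), not Guidance's actual API; the candidate list and scores are made up:

```python
import re

def constrained_pick(candidates, pattern):
    """Toy constrained decoding: from a model's candidate continuations
    (text, log-probability pairs), keep only those matching the required
    pattern, then take the highest-scoring survivor."""
    rx = re.compile(pattern)
    allowed = [(text, score) for text, score in candidates if rx.fullmatch(text)]
    if not allowed:
        raise ValueError("no candidate satisfies the constraint")
    return max(allowed, key=lambda pair: pair[1])[0]

# Hypothetical model output: (continuation, log-probability) pairs.
candidates = [("maybe", -0.1), ("yes", -0.7), ("no", -1.2)]
# Constrain the answer to "yes" or "no", as a regex constraint would.
print(constrained_pick(candidates, r"yes|no"))  # -> yes
```

Real constrained decoding applies such masks token by token during sampling, which is why it can reduce both latency and cost: the model never wastes tokens on output that would be rejected.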
  • 13
VidGear

    A High-performance cross-platform Video Processing Python framework

    ...The framework is built around modular components called “gears,” each responsible for tasks such as video capture, streaming, encoding, and network transmission. It supports multi-threaded and asynchronous operations, enabling low-latency processing and efficient handling of high-throughput video streams. VidGear is designed to handle a wide range of use cases, including live streaming, video stabilization, screencasting, and distributed video systems. Its emphasis on simplicity allows developers to implement advanced multimedia pipelines with minimal code while maintaining performance and flexibility.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 14
PersonaLive

    Expressive Portrait Image Animation for Live Streaming

    ...It leverages deep generative models that condition on a static reference image and a driving input (such as motion or expression cues) to produce a seamless animated portrait sequence that can run indefinitely without segmentation artifacts. The framework prioritizes low-latency and streamable output, making it suitable for real-time creative workflows, broadcast overlays, or interactive avatars on consumer-grade GPUs. PersonaLive’s architecture balances visual quality and efficiency by combining motion encoding, temporal modules, and hybrid implicit control signals to preserve identity and stable expression through long sequences.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 15
CodeLlama

    Inference code for CodeLlama models

    Code Llama is a family of Llama-based code models optimized for programming tasks such as code generation, completion, and repair, with variants specialized for base coding, Python, and instruction following. The repo documents the sizes and capabilities (e.g., 7B, 13B, 34B) and highlights features like infilling and large input context to support real IDE workflows. It targets both general software synthesis and language-specific productivity, offering strong performance among open models...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 16
DeepSeek-OCR

    Contexts Optical Compression

    ...The system treats OCR not simply as “read the text” but as “understand what the text is doing in the image”—for example distinguishing captions from body text, interpreting tables, or recognizing handwritten versus printed words. It supports local deployment, enabling organizations concerned about privacy or latency to run the pipeline on-premises rather than send sensitive documents to third-party cloud services. The codebase is written in Python with a focus on modularity: you can swap preprocessing, recognition, and post-processing components as needed for custom workflows.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 17
SenseVoice

    Multilingual speech recognition and audio understanding model

    SenseVoice is a speech foundation model designed to perform multiple voice understanding tasks from audio input. It provides capabilities such as automatic speech recognition, spoken language identification, speech emotion recognition, and audio event detection within a single system. SenseVoice is trained on more than 400,000 hours of speech data and supports over 50 languages for multilingual recognition tasks. It is built to achieve high transcription accuracy while maintaining efficient...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 18
Devon

    Open source AI pair programmer for coding, debugging, automation

    ...Devon uses a client-server architecture with a Python backend and multiple user interfaces, including a terminal interface and an Electron-based desktop application. Devon integrates with multiple large language models, allowing users to choose between different providers for performance, cost, and latency considerations. It is capable of performing tasks such as debugging, writing tests, analyzing code structure, and navigating complex repositories. Devon also includes features for session management, enabling users to start, pause, and revert actions while maintaining context.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 19
H2O Wave

    Realtime Web Apps and Dashboards for Python and R

No HTML, CSS, or JavaScript skills are required. Build rich, interactive web apps using pure Python. Broadcast live information, visualizations, and graphics using Wave's low-latency real-time server. Instant control over every connected web browser using a simple and intuitive programming model. Preview your app live as you code. Dramatically reduce the time and effort to build web apps. Easily share your apps with end-users, get feedback, improve, and iterate. ~10MB static executables for Linux, Windows, OSX, BSD, and Solaris on AMD64, 386, ARM, and PPC. ...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 20
django-prometheus

    Export Django monitoring metrics for Prometheus.io

...This library provides Prometheus metrics for Django-related operations. Prometheus uses histogram-based grouping for monitoring latencies. You can define custom latency buckets; adding more buckets increases accuracy but decreases performance. SQLite, MySQL, and PostgreSQL databases can be monitored. Just replace the ENGINE property of your database, swapping django.db.backends for django_prometheus.db.backends. To monitor the creation/deletion/update rate of your models, add a mixin to them. ...
    Downloads: 0 This Week
    Last Update:
    See Project
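The database-engine swap and custom latency buckets described above can be sketched as a settings.py fragment. The engine path and the PROMETHEUS_LATENCY_BUCKETS setting follow the names documented in the django-prometheus README; the database name is hypothetical, and you should adjust the fragment to your project:

```python
# settings.py fragment (sketch). Swapping django.db.backends for
# django_prometheus.db.backends makes every query emit latency metrics.
DATABASES = {
    "default": {
        "ENGINE": "django_prometheus.db.backends.postgresql",
        "NAME": "mydb",  # hypothetical database name
    }
}

# Custom histogram buckets, in seconds. Finer buckets give more accurate
# latency percentiles at a small performance cost; the final +inf bucket
# catches everything slower than the last boundary.
PROMETHEUS_LATENCY_BUCKETS = (0.01, 0.05, 0.1, 0.5, 1.0, 5.0, float("inf"))
```

Model-level counters (creation/deletion/update rates) come from adding the library's model mixin to each model class, as the description notes.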
  • 21
Nothing Ever Happens

    Focused async Python bot for Polymarket

    ...The project is built in Python using asynchronous architecture, allowing it to monitor markets, evaluate opportunities, and execute trades continuously with minimal latency. Its core concept is based on statistical observations that a majority of prediction market outcomes resolve negatively, and it attempts to exploit this base-rate bias through systematic participation rather than predictive modeling. The bot includes a safety-oriented design with explicit environment variable requirements to enable live trading, ensuring that users consciously opt into real financial risk, along with a paper trading mode for testing without capital exposure.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
Pipecat

    Framework for building real-time voice and multimodal AI agents

    ...It provides developers with tools to orchestrate complex pipelines that combine speech recognition, language models, audio processing, and speech synthesis into a cohesive conversational system. Pipecat focuses on low-latency interactions so voice conversations with AI feel natural and responsive during live use. Pipecat allows applications to integrate multiple AI services and transports, enabling flexible deployment across different environments and communication channels. Developers can create a wide range of interactive systems including voice assistants, customer service agents, interactive storytelling applications, and multimodal interfaces that combine voice, video, images, and text. ...
    Downloads: 1 This Week
    Last Update:
    See Project
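The low-latency pipeline idea can be sketched with stdlib asyncio. This is a toy illustration, not Pipecat's actual API: stages run concurrently and hand chunks downstream the moment they arrive, instead of buffering a whole utterance before processing:

```python
import asyncio

async def producer(out_q):
    # Simulated audio chunks arriving from a transport.
    for chunk in ["hel", "lo ", "wor", "ld"]:
        await out_q.put(chunk)
    await out_q.put(None)  # end-of-stream marker

async def transcriber(in_q, out_q):
    # Toy "speech recognition" stage: forwards each chunk immediately,
    # which is the key to keeping end-to-end latency low.
    while (chunk := await in_q.get()) is not None:
        await out_q.put(chunk.upper())
    await out_q.put(None)

async def main():
    q1, q2 = asyncio.Queue(), asyncio.Queue()
    results = []

    async def consumer():
        while (chunk := await q2.get()) is not None:
            results.append(chunk)

    # All stages run concurrently; data streams through the queues.
    await asyncio.gather(producer(q1), transcriber(q1, q2), consumer())
    return "".join(results)

print(asyncio.run(main()))  # -> HELLO WORLD
```

A real voice pipeline replaces the toy stages with speech recognition, an LLM, and speech synthesis, but the same queue-per-stage structure is what lets responses begin before the input has finished.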
  • 23
xFormers

    Hackable and optimized Transformers building blocks

    xformers is a modular, performance-oriented library of transformer building blocks, designed to allow researchers and engineers to compose, experiment, and optimize transformer architectures more flexibly than monolithic frameworks. It abstracts components like attention layers, feedforward modules, normalization, and positional encoding, so you can mix and match or swap optimized kernels easily. One of its key goals is efficient attention: it supports dense, sparse, low-rank, and...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 24
TensorRT LLM

    TensorRT LLM provides users with an easy-to-use Python API

    ...It provides a Python-based API built on top of PyTorch that allows developers to define, customize, and deploy LLMs efficiently across a variety of hardware configurations, from single GPUs to large multi-node clusters. The library focuses on maximizing throughput and minimizing latency through advanced techniques such as quantization, custom attention kernels, and optimized memory management strategies. It includes support for cutting-edge inference methods like speculative decoding and inflight batching, enabling real-time and large-scale AI applications. TensorRT-LLM integrates seamlessly with NVIDIA’s broader inference ecosystem, including Triton Inference Server and distributed deployment frameworks, making it suitable for production environments.
    Downloads: 0 This Week
    Last Update:
    See Project
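Speculative decoding, one of the inference methods mentioned above, can be illustrated with a toy sketch. This shows only the control flow (a cheap draft model proposes a block of tokens; the target model verifies the prefix), not the probabilistic acceptance rule real systems use; both "models" here are made-up stand-ins over integer tokens:

```python
def speculative_decode(draft, target, seq, n_spec=3, max_len=10):
    """Toy speculative decoding: the draft proposes n_spec tokens at once;
    the target verifies them and keeps the longest agreeing prefix, so most
    steps accept several tokens while the output stays target-correct."""
    seq = list(seq)
    while len(seq) < max_len:
        # Draft proposes a block of tokens cheaply.
        proposal = []
        for _ in range(n_spec):
            proposal.append(draft(seq + proposal))
        # Target verifies: accept matching tokens; on the first mismatch,
        # emit the target's own token and restart drafting from there.
        for tok in proposal:
            if len(seq) >= max_len:
                break
            expected = target(seq)
            seq.append(expected)
            if tok != expected:
                break
    return seq

# Toy stand-in models: the target counts up by one; the draft miscounts
# every 4th token, forcing occasional rejections.
target = lambda seq: seq[-1] + 1
draft = lambda seq: seq[-1] + (2 if len(seq) % 4 == 0 else 1)
print(speculative_decode(draft, target, [0]))  # -> [0, 1, ..., 9]
```

The payoff is latency: when the draft is usually right, each target verification step advances the sequence by several tokens instead of one, without changing the final output.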
  • 25
Text Embeddings Inference

    High-performance inference server for text embeddings models API layer

    ...It provides an API interface that allows developers to integrate embedding capabilities into applications without managing model internals directly. Text Embeddings Inference is optimized for throughput and low latency, enabling it to handle large volumes of requests reliably. It also emphasizes ease of deployment, often using containerization and configurable runtime options to adapt to different infrastructure setups.
    Downloads: 0 This Week
    Last Update:
    See Project
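The API interface mentioned above can be sketched as an HTTP call. The /embed route, JSON {"inputs": ...} payload, and default port follow the upstream Text Embeddings Inference documentation, but verify them against your deployment; this stdlib-only sketch builds the request without sending it:

```python
import json
import urllib.request

def build_embed_request(texts, base_url="http://localhost:8080"):
    """Build a POST request for a TEI server's /embed route.

    base_url is the documented default; adjust it for your deployment.
    """
    payload = json.dumps({"inputs": texts}).encode()
    return urllib.request.Request(
        f"{base_url}/embed",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_embed_request(["What is Deep Learning?"])
# With a server running, urllib.request.urlopen(req) should return a JSON
# array of embedding vectors -- one list of floats per input string.
```

Batching several strings into one request is how the server achieves its throughput; for latency-sensitive paths, send small batches instead.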