Showing 116 open source projects for "time"

  • 1
    Torch-TensorRT

    PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT

    Torch-TensorRT is a compiler for PyTorch/TorchScript, targeting NVIDIA GPUs via NVIDIA’s TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch’s Just-In-Time (JIT) compiler, Torch-TensorRT is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript program into a module targeting a TensorRT engine. Torch-TensorRT operates as a PyTorch extension and compiles modules that integrate into the JIT runtime seamlessly. ...
    Downloads: 10 This Week
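    A minimal sketch of the ahead-of-time compile step described above, assuming a scripted image model and a fixed input shape (file names and shapes are illustrative, not from the listing):

    import torch
    import torch_tensorrt

    # Load an existing TorchScript module (file name is a placeholder).
    model = torch.jit.load("model_scripted.pt").eval().cuda()

    # Explicit AOT compile: TorchScript -> module backed by a TensorRT engine.
    trt_module = torch_tensorrt.compile(
        model,
        inputs=[torch_tensorrt.Input((1, 3, 224, 224))],  # assumed input shape
        enabled_precisions={torch.half},                  # allow FP16 kernels
    )

    # The result drops back into the JIT runtime like any other module.
    torch.jit.save(trt_module, "model_trt.ts")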
  • 2
    ONNX Runtime

    ONNX Runtime: cross-platform, high performance ML inferencing

    ...ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. ONNX Runtime training can accelerate model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition to existing PyTorch training scripts. Support for a variety of frameworks, operating systems, and hardware platforms. Built-in optimizations deliver up to 17X faster inferencing and up to 1.4X faster training.
    Downloads: 54 This Week
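    For the inferencing side, a minimal sketch using the ONNX Runtime Python API (the model file name and input shape are illustrative assumptions):

    import numpy as np
    import onnxruntime as ort

    # Create a session for an exported ONNX model (file name is a placeholder).
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

    # Feed a dummy input matching the model's expected shape (assumed here).
    input_name = session.get_inputs()[0].name
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)

    outputs = session.run(None, {input_name: x})  # None -> return all outputs
    print(outputs[0].shape)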
  • 3
    ChatLLM.cpp

    Pure C++ implementation of several models for real-time chatting

    chatllm.cpp is a pure C++ implementation designed for real-time chatting with Large Language Models (LLMs) on personal computers, supporting both CPU and GPU execution. It enables users to run various LLMs ranging from less than 1 billion to over 300 billion parameters, facilitating responsive and efficient conversational AI experiences without relying on external servers.
    Downloads: 5 This Week
  • 4
    OpenCV

    Open Source Computer Vision Library

    OpenCV (Open Source Computer Vision Library) is a comprehensive open-source library for computer vision, machine learning, and image processing. It enables developers to build real-time vision applications ranging from facial recognition to object tracking. OpenCV supports a wide range of programming languages including C++, Python, and Java, and is optimized for both CPU and GPU operations.
    Downloads: 13 This Week
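    A minimal real-time sketch using OpenCV's Python bindings: grab webcam frames and run edge detection on each one (camera index and thresholds are arbitrary):

    import cv2

    cap = cv2.VideoCapture(0)  # default camera; index is an assumption
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)       # per-frame edge detection
        cv2.imshow("edges", edges)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()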
  • 5
    LiteRT

    LiteRT is the new name for TensorFlow Lite (TFLite)

    LiteRT is an experimental, real-time inference runtime built by Google AI Edge to run lightweight ML models on edge devices with ultra-low latency. It focuses on delivering predictable and consistent performance for models used in time-critical applications like robotics, AR/VR, and IoT. LiteRT is designed to be hardware-agnostic, with minimal dependencies and tight control over execution scheduling.
    Downloads: 0 This Week
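    Since LiteRT is the new name for TensorFlow Lite, on-device inference can be sketched with the classic TFLite interpreter API (the model file name and dummy input are assumptions):

    import numpy as np
    import tensorflow as tf

    # Load a .tflite flatbuffer (file name is a placeholder).
    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()

    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Run one inference on dummy data shaped like the model's input.
    x = np.zeros(inp["shape"], dtype=inp["dtype"])
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    result = interpreter.get_tensor(out["index"])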
  • 6
    MuJoCo MPC

    Real-time behaviour synthesis with MuJoCo, using Predictive Control

    MuJoCo MPC (MJPC) is an advanced interactive framework for real-time model predictive control (MPC) built on top of the MuJoCo physics engine, developed by Google DeepMind. It allows researchers and roboticists to design, visualize, and execute complex control tasks for simulated or real robotic systems. MJPC integrates a high-performance GUI and multiple predictive control algorithms, including iLQG, gradient descent, and Predictive Sampling — a competitive, derivative-free method that achieves robust real-time control. ...
    Downloads: 0 This Week
  • 7
    AWS IoT FleetWise Edge

    AWS IoT FleetWise Edge Agent

    AWS IoT FleetWise makes it easy and cost-effective for automakers to collect, transform, and transfer vehicle data to the cloud in near real time and use it to build applications with analytics and machine learning that improve vehicle quality, safety, and autonomy. Train autonomous vehicles (AVs) and advanced driver assistance systems (ADAS) with camera data collected from a fleet of production vehicles. ...
    Downloads: 0 This Week
  • 8
    OpenMLDB

    OpenMLDB is an open-source machine learning database

    ...It prioritizes open-source, SQL-based feature engineering and offers a feature platform that provides consistent features for training and inference. Real-time features are essential for many machine learning applications, such as real-time personalized recommendations and risk analytics. However, a feature engineering script developed by data scientists (Python scripts in most cases) usually cannot be deployed directly into production for online inference, because it cannot meet engineering requirements such as low latency, high throughput, and high availability.
    Downloads: 0 This Week
  • 9
    DeepGEMM

    Clean and efficient FP8 GEMM kernels with fine-grained scaling

    ...It supports both standard and “grouped” GEMMs, which is useful for architectures like Mixture of Experts (MoE) that require segmented matrix multiplications. One distinguishing aspect is that DeepGEMM compiles its kernels at runtime (via a lightweight Just-In-Time (JIT) module), so users don’t need to precompile CUDA kernels before installation. Despite its lean design, it includes fine-grained scaling strategies and optimizations inspired by cutting-edge systems (drawing on ideas from CUTLASS and CuTe), but in a more streamlined form.
    Downloads: 2 This Week
  • 10
    IREE

    A retargetable MLIR-based machine learning compiler runtime toolkit

    IREE (Intermediate Representation Execution Environment, pronounced as "eerie") is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the data center and down to satisfy the constraints and special considerations of mobile and edge deployments.
    Downloads: 1 This Week
  • 11
    ChatGLM.cpp

    C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)

    ChatGLM.cpp is a C++ implementation of the ChatGLM-6B model, enabling efficient local inference without requiring a Python environment. It is optimized for running on consumer hardware.
    Downloads: 1 This Week
  • 12
    ESP32-CAM_MJPEG2SD

    ESP32 Camera motion capture application to record JPEGs to SD card

    Application for ESP32 / ESP32S3 boards with an OV2640 / OV5640 camera to record JPEGs to SD card as AVI files and play them back to the browser as an MJPEG stream. The AVI format allows recordings to replay at the correct frame rate on media players. If a microphone is installed, a WAV file is also created and stored in the AVI file. The ESP32 cannot support all of the features, as it will run out of heap space. For better functionality and performance, use one of the new ESP32S3 camera boards, eg...
    Downloads: 3 This Week
  • 13
    Vespa

    The open big data serving engine

    Make AI-driven decisions using your data, in real time. At any scale, with unbeatable performance. Vespa is a full-featured text search engine and supports both regular text search and fast approximate vector search (ANN). This makes it easy to create high-performing search applications at any scale, whether you want to use traditional techniques or a modern vector-based approach. You can even combine both approaches efficiently in the same query, something no other engine can do. ...
    Downloads: 3 This Week
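    Queries are served over HTTP; a minimal sketch against a locally running Vespa container (the endpoint, port, query text, and document field names are assumptions):

    import requests

    # Combine structured YQL with a free-text query against an assumed schema.
    params = {
        "yql": "select * from sources * where userQuery()",
        "query": "real time serving",
        "hits": 5,
    }
    resp = requests.get("http://localhost:8080/search/", params=params)
    for hit in resp.json().get("root", {}).get("children", []):
        print(hit["relevance"], hit["fields"].get("title"))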
  • 14
    FlashMLA

    FlashMLA: Efficient Multi-head Latent Attention Kernels

    FlashMLA is a high-performance decoding kernel library designed especially for Multi-Head Latent Attention (MLA) workloads, targeting NVIDIA Hopper GPU architectures. It provides optimized kernels for MLA decoding, including support for variable-length sequences, helping reduce latency and increase throughput in model inference systems using that attention style. The library supports both BF16 and FP16 data types, and includes a paged KV cache implementation with a block size of 64 to...
    Downloads: 2 This Week
  • 15
    cuML

    RAPIDS Machine Learning Library

    cuML is a suite of libraries that implement machine learning algorithms and mathematical primitive functions that share compatible APIs with other RAPIDS projects. cuML enables data scientists, researchers, and software engineers to run traditional tabular ML tasks on GPUs without going into the details of CUDA programming. In most cases, cuML's Python API matches the API from scikit-learn. For large datasets, these GPU-based implementations can complete 10-50x faster than their CPU...
    Downloads: 0 This Week
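    Because cuML's Python API tracks scikit-learn, moving a workload to the GPU is often just an import swap; a minimal sketch with synthetic data (dataset sizes are arbitrary):

    import cupy as cp
    from cuml.cluster import KMeans   # scikit-learn counterpart: sklearn.cluster.KMeans

    # Synthetic feature matrix living in GPU memory.
    X = cp.random.random((100_000, 16)).astype(cp.float32)

    km = KMeans(n_clusters=8, random_state=0)
    km.fit(X)
    labels = km.predict(X)            # same fit/predict surface as scikit-learn
    print(labels[:10])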
  • 16
    OpenVINO Model Server

    A scalable inference server for models optimized with OpenVINO

    OpenVINO™ Model Server is a high-performance inference serving system designed to host and serve machine learning models that have been optimized with the OpenVINO toolkit. It’s implemented in C++ for scalability and efficiency, making it suitable for both edge and cloud deployments where inference workloads must be reliable and high throughput. The server exposes model inference via standard network protocols like REST and gRPC, allowing any client that speaks those protocols to request...
    Downloads: 0 This Week
  • 17
    TensorFlow Serving

    Serving system for machine learning models

    TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. It deals with the inference aspect of machine learning, taking models after training and managing their lifetimes, providing clients with versioned access via a high-performance, reference-counted lookup table. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can be easily extended to serve other types of models and data. The...
    Downloads: 0 This Week
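    A minimal client sketch against the server's documented REST predict endpoint (the host, port, model name, and feature vector are illustrative assumptions):

    import json
    import requests

    # TensorFlow Serving's REST API defaults to port 8501;
    # "my_model" is a placeholder for a served model name.
    url = "http://localhost:8501/v1/models/my_model:predict"
    payload = {"instances": [[1.0, 2.0, 5.0]]}

    resp = requests.post(url, data=json.dumps(payload))
    print(resp.json()["predictions"])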
  • 18
    DeepDetect

    Deep Learning API and Server in C++14 with support for Caffe, PyTorch

    ...While the Open Source Deep Learning Server is the core element, with a REST API and multi-platform support that allows training & inference everywhere, the Deep Learning Platform provides higher-level management for training neural network models and using them as if they were simple code snippets. Ready for applications of image tagging, object detection, segmentation, OCR, audio, video, and text classification, plus CSV for tabular data and time series. Neural network templates for the most effective architectures for GPU, CPU, and embedded devices. Training in a few hours and with small data thanks to 25+ pre-trained models. Fully open source, with an ecosystem of tools (API clients, video, annotation, ...). Fast server written in pure C++, with a single codebase for cloud, desktop & embedded.
    Downloads: 0 This Week
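    Clients talk to the server over its REST API; a minimal prediction sketch, assuming a classification service named "imgclass" was already created on the default port (the service name, output parameter, and input URL are all assumptions):

    import requests

    # POST /predict against a locally running DeepDetect server (default port 8080).
    payload = {
        "service": "imgclass",                       # assumed, pre-created service
        "parameters": {"output": {"best": 3}},       # ask for the top-3 classes
        "data": ["https://example.com/cat.jpg"],     # placeholder input URI
    }
    resp = requests.post("http://localhost:8080/predict", json=payload)
    print(resp.json())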
  • 19
    OpenCV

    Open Source Computer Vision Library

    The Open Source Computer Vision Library has more than 2,500 algorithms, extensive documentation, and sample code for real-time computer vision. It works on Windows, Linux, Mac OS X, Android, and iOS, and in your browser through JavaScript. Languages: C++, Python, Julia, JavaScript. Homepage: https://opencv.org; Q&A forum: https://forum.opencv.org/; documentation: https://docs.opencv.org; source code: https://github.com/opencv. Please pay special attention to our tutorials!
    Downloads: 3,046 This Week
  • 20
    QChartist

    Free and Open Source Technical Analysis Charting Software

    ...It is now faster and much more professional thanks to the use of a C++ layer (used mostly for calculations) over the standard Basic layer (used mostly for the GUI). You can use astro indicators and functions from a library for astronomical calculations. You can get real-time quotations thanks to the Yahoo Finance, Alpha Vantage, Tiingo, Stooq, Finnhub, and Twelvedata data sources.
    Downloads: 4 This Week
  • 21
    DocWire SDK

    Award-winning modern data processing SDK in C++20

    ...DocWire SDK aims to expand its capabilities, focusing on versatile data extraction, platform support, and seamless integration with various systems. DocWire SDK is dedicated to streamlining data processing, reducing development time and costs, and harnessing the potential of AI. Its advancements promise a superior experience compared to its predecessor, DocToText.
    Downloads: 8 This Week
  • 22

    Astrape

    Optical-packet node transceiver frequency allocation

    Astrape improves the average laser configuration time of coherent transceivers in an optical network consisting of multiple edge nodes (whiteboxes) with ROADMs in between. The process is evaluated on a testbed using the appropriate lab equipment (or via simulation when required). For that purpose, a software agent (a Netconf server) residing at the whiteboxes is developed, receiving input from the Software-Defined Networking (SDN) packet controller (PacketCTL, a Netconf client). ...
    Downloads: 0 This Week
  • 23
    OCR Manga Reader for Android

    Android Manga reader with Japanese OCR and dictionary capabilities

    OCR Manga Reader is a free and open source Android app that allows you to quickly OCR and look up Japanese words in real time. It does not have ads or telemetry/spyware and does not require an Internet connection. Supports both EDICT and EPWING dictionaries. Requires Android 4.0 (Ice Cream Sandwich) or higher. See http://ocrmangareaderforandroid.sourceforge.net/ for details.
    Downloads: 38 This Week
  • 24
    Bullet Physics SDK

    Real-time collision detection and multi-physics simulation for VR

    This is the official C++ source code repository of the Bullet Physics SDK: real-time collision detection and multi-physics simulation for VR, games, visual effects, robotics, machine learning etc. We are developing a new differentiable simulator for robotics learning, called Tiny Differentiable Simulator, or TDS. The simulator allows for hybrid simulation with neural networks. It allows different automatic differentiation backends, for forward and reverse mode gradients.
    Downloads: 8 This Week
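    The SDK ships with PyBullet bindings; a minimal headless sketch that steps a rigid-body simulation for one simulated second (the URDF assets come from pybullet_data and are illustrative):

    import pybullet as p
    import pybullet_data

    p.connect(p.DIRECT)  # headless; use p.GUI for an interactive window
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    p.setGravity(0, 0, -9.81)

    p.loadURDF("plane.urdf")
    body = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 1])

    for _ in range(240):          # default timestep is 1/240 s
        p.stepSimulation()

    print(p.getBasePositionAndOrientation(body))
    p.disconnect()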
  • 25
    Coqui STT

    The deep learning toolkit for speech-to-text

    Coqui STT is a fast, open-source, multi-platform, deep-learning toolkit for training and deploying speech-to-text models. Coqui STT is battle-tested in both production and research. Multiple possible transcripts, each with an associated confidence score. Experience the immediacy of script-to-performance. With Coqui text-to-speech, production times go from months to minutes. With Coqui, the post is a pleasure. Effortlessly clone the voices of your talent and have the clone handle the problems...
    Downloads: 4 This Week
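    A minimal transcription sketch with the Coqui STT Python package (model and audio file names are placeholders; the audio is assumed to be 16 kHz, 16-bit mono, as the models expect):

    import wave
    import numpy as np
    from stt import Model  # pip package "stt"

    model = Model("model.tflite")                 # placeholder acoustic model
    with wave.open("audio.wav", "rb") as wav:     # assumed 16 kHz, 16-bit mono
        frames = wav.readframes(wav.getnframes())
    audio = np.frombuffer(frames, dtype=np.int16)

    print(model.stt(audio))                       # single best transcript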