Showing 98 open source projects for "open dcl runtime"

  • 1
    ONNX Runtime

    ONNX Runtime: cross-platform, high performance ML inferencing

    ONNX Runtime is a cross-platform inference and training machine-learning accelerator. ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators...
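    (A usage sketch for this project appears after the listing.)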
    Downloads: 42 This Week
  • 2
    Monoio

    Rust async runtime based on io-uring

    Monoio is a Rust asynchronous runtime designed for high-performance I/O-bound servers and applications, built around native OS async I/O primitives (e.g. io_uring on Linux, epoll / kqueue on other Unix-like systems), rather than layering atop an existing runtime. Its design philosophy centers on a “thread-per-core” model where each core runs its own event loop, minimizing cross-thread synchronization needs, avoiding the overhead and complexity of task scheduling, and letting developers write...
    Downloads: 0 This Week
  • 3
    kokoro-onnx

    TTS with kokoro and onnx runtime

    kokoro-onnx is a text-to-speech toolkit that wraps the Kokoro neural TTS model in an easy-to-use ONNX Runtime interface, so you can generate speech from Python with minimal setup. It focuses on running efficiently on commodity hardware, including macOS with Apple Silicon, while still delivering near real-time performance for many use cases. The project ships prebuilt model files and a simple example script, so you can go from installation to producing an audio.wav file in just a few steps....
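    (A usage sketch for this project appears after the listing.)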
    Downloads: 17 This Week
  • 4
    LiteRT

    LiteRT is the new name for TensorFlow Lite (TFLite)

    LiteRT, Google AI Edge's successor to TensorFlow Lite, is a runtime for running lightweight ML models on edge devices with very low latency. It focuses on delivering predictable, consistent performance for models used in time-critical applications such as robotics, AR/VR, and IoT, and it is designed to be hardware-agnostic, with minimal dependencies and tight control over execution scheduling.
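    (A usage sketch for this project appears after the listing.)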
    Downloads: 8 This Week
  • 5
    ByteHook

    ByteHook is an Android PLT hook library

    ByteHook is ByteDance's PLT (Procedure Linkage Table) hook library for Android, used to intercept calls into native shared libraries at runtime. This kind of low-level hooking supports runtime introspection, behavioral monitoring, and instrumentation-based workflows such as security analysis, tracing, sandboxing, performance monitoring, and debugging, without requiring changes to application source code.
    Downloads: 0 This Week
  • 6
    Torch-TensorRT

    PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT

    Torch-TensorRT is a compiler for PyTorch/TorchScript, targeting NVIDIA GPUs via NVIDIA’s TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch’s Just-In-Time (JIT) compiler, Torch-TensorRT is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript program into a module targeting a TensorRT engine. Torch-TensorRT operates as a PyTorch extension and compiles modules that integrate...
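    (A usage sketch for this project appears after the listing.)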
    Downloads: 21 This Week
  • 7
    mlx

    MLX: An array framework for Apple silicon

    MLX is an array framework for machine learning research on Apple silicon, developed by Apple. It provides a NumPy-like Python API (with C++ and Swift bindings), lazy evaluation, composable function transformations such as automatic differentiation and vectorization, and a unified memory model that lets arrays be used on the CPU and GPU without copying. Higher-level packages for neural networks and optimizers make it practical for building and training models directly on Mac hardware.
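    (A usage sketch for this project appears after the listing.)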
    Downloads: 3 This Week
  • 8
    whisper.cpp

    Port of OpenAI's Whisper model in C/C++

    whisper.cpp is a lightweight C/C++ reimplementation of OpenAI's Whisper automatic speech recognition (ASR) model, designed for efficient, standalone transcription without external dependencies. The entire high-level implementation of the model is contained in whisper.h and whisper.cpp; the rest of the code is part of the ggml machine learning library. The example command in the repository downloads the base.en model converted to the custom ggml format and runs inference on all .wav samples in the samples folder. ...
    Downloads: 411 This Week
  • 9
    langrocks

    Tools like web browser, computer access and code runner for LLMs

    Langrocks provides tools such as a web browser, computer access, and code runners that LLM applications and agents can invoke, letting models browse the web, run code, and work with files instead of being limited to plain text generation.
    Downloads: 0 This Week
  • 10
    OpenVINO

    OpenVINO™ Toolkit repository

    ...This open-source version includes several components: the Model Optimizer, OpenVINO™ Runtime, and Post-Training Optimization Tool, as well as CPU, GPU, MYRIAD, multi-device, and heterogeneous plugins to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi.
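    (A usage sketch for this project appears after the listing.)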
    Downloads: 36 This Week
  • 11
    Luminal

    Deep learning at the speed of light

    Luminal is a deep learning library written in Rust that aims to make models run as fast as possible by compiling them ahead of time. Instead of executing operations eagerly, it represents a model as a static computation graph built from a small set of primitive operations, then applies graph-level optimizations such as kernel fusion before generating code for the target backend. Keeping the primitive set small keeps the compiler tractable while still leaving room for aggressive optimization.
    Downloads: 0 This Week
  • 12
    IREE

    A retargetable MLIR-based machine learning compiler runtime toolkit

    IREE (Intermediate Representation Execution Environment, pronounced as "eerie") is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the data center and down to satisfy the constraints and special considerations of mobile and edge deployments.
    Downloads: 0 This Week
  • 13
    llama.cpp

    Port of Facebook's LLaMA model in C/C++

    The llama.cpp project enables the inference of Meta's LLaMA model (and other models) in pure C/C++ without requiring a Python runtime. It is designed for efficient and fast model execution, offering easy integration for applications needing LLM-based capabilities. The repository focuses on providing a highly optimized and portable implementation for running large language models directly within C/C++ environments.
    Downloads: 113 This Week
  • 14
    Spice.ai OSS

    A self-hostable CDN for databases

    Spice is a portable runtime offering developers a unified SQL interface to materialize, accelerate, and query data from any database, data warehouse, or data lake. Spice connects, fuses, and delivers data to applications, machine-learning models, and AI backends, functioning as an application-specific, tier-optimized Database CDN. The Spice runtime, written in Rust, is built with industry-leading technologies such as Apache DataFusion, Apache Arrow, Apache Arrow Flight, SQLite, and DuckDB....
    Downloads: 4 This Week
  • 15
    Elkeid

    Open source solution that can meet the security requirements of various workloads

    ...Elkeid combines kernel-level data collection, user-space agents, and runtime instrumentation (RASP) to detect malicious behavior, file anomalies, runtime exploits, and suspicious container activity. For container or cloud-native workloads, it also supports gathering audit logs from Kubernetes and correlating events across processes, network, and file activity to detect security threats. The platform packages data collection, event-streaming, and a rule/event engine (called “HUB”) — letting users define detection rules, alerts, baseline checks, and policy enforcement.
    Downloads: 0 This Week
  • 16
    NVIDIA FLARE

    NVIDIA Federated Learning Application Runtime Environment

    NVIDIA FLARE (Federated Learning Application Runtime Environment) is a domain-agnostic, open-source, extensible SDK that allows researchers and data scientists to adapt existing ML/DL workflows (PyTorch, TensorFlow, scikit-learn, XGBoost, etc.) to a federated paradigm. It enables platform developers to build a secure, privacy-preserving offering for distributed multi-party collaboration.
    Downloads: 5 This Week
  • 17
    InfiAgent

    Build your own Cowork, AI Scientist and other SoTA Agents

    InfiAgent is an open-source AI agent framework for building powerful, long-running autonomous agents capable of tackling complex tasks without collapsing under growing context or tool-invocation histories. Designed as a "Multi-Level Agent" (MLA) system, it externalizes persistent state to the file system so that agents can operate over unlimited runtime without token-intensive context compression. This lets workflows such as research paper drafting, experiments, coding, and document generation run reliably. ...
    Downloads: 2 This Week
  • 18
    WorkAny

    Desktop Agent for Any Task

    WorkAny is an open-source desktop AI agent application that executes generic tasks through natural language commands, effectively bringing intelligent automation into everyday workflows without needing to write code manually. It acts as a unified environment where users can ask the AI to generate documents, presentations, websites, spreadsheets, organize files, or write code — all with real-time streaming outputs directly in the app, so you see results as the AI produces them. ...
    Downloads: 1 This Week
  • 19
    SGLang

    SGLang is a fast serving framework for large language models

    SGLang is a fast serving framework for large language models and vision language models. It makes your interaction with models faster and more controllable by co-designing the backend runtime and frontend language.
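    (A usage sketch for this project appears after the listing.)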
    Downloads: 13 This Week
  • 20
    Vercel AI SDK

    Build AI-powered applications with React, Svelte, Vue, and Solid

    The Vercel AI SDK is a library for building AI-powered streaming text and chat UIs.
    Downloads: 6 This Week
  • 21
    MochiDiffusion

    Run Stable Diffusion on Mac natively

    MochiDiffusion is a native macOS application that allows users to run Stable Diffusion models locally, leveraging Apple Silicon GPU acceleration via Core ML. It offers users GUI controls for prompts and model configuration without needing Python or Docker, enabling offline image generation.
    Downloads: 4 This Week
  • 22
    Agent Stack

    Deploy and share agents with open infrastructure

    Agent Stack is an open infrastructure platform designed to take AI agents from prototype to production, no matter how they were built. It includes a runtime environment, multi-tenant web UI, catalog of agents, and deployment flow that seeks to remove vendor lock-in and provide greater autonomy. Under the hood it’s built on the “Agent2Agent” (A2A) protocol, enabling interoperability between different agent ecosystems, runtime services, and frameworks.
    Downloads: 1 This Week
  • 23
    Upsonic

    The most reliable AI agent framework that supports MCP

    Upsonic is a reliability-focused AI agent framework designed for real-world applications. It enables the development of trusted agent workflows within organizations by incorporating advanced reliability features, such as verification layers and output evaluation systems. The framework supports the Model Context Protocol (MCP), facilitating integration with various tools and enhancing agent capabilities.
    Downloads: 4 This Week
  • 24
    E2B

    Secure open source cloud runtime for AI apps & AI agents

    E2B's Code Interpreter SDK allows you to add code-interpreting capabilities to your AI apps. E2B Sandbox is a secure sandboxed cloud environment made for AI agents and AI apps. Sandboxes allow AI agents and apps to have long-running, secure cloud environments. In these environments, large language models can use the same tools as humans do.
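    (A usage sketch for this project appears after the listing.)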
    Downloads: 4 This Week
  • 25
    Agentex

    Open source codebase for Scale Agentex

    AgentEX is an open framework from Scale for building, running, and evaluating agentic workflows, with an emphasis on reproducibility and measurable outcomes rather than ad-hoc demos. It treats an “agent” as a composition of a policy (the LLM), tools, memory, and an execution runtime so you can test the whole loop, not just prompting. The repo focuses on structured experiments: standardized tasks, canonical tool interfaces, and logs that make it possible to compare models, prompts, and tool sets fairly. ...
    Downloads: 0 This Week
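ONNX Runtime (entry 1): a minimal Python inference sketch. The model path and input shape are placeholders; any exported .onnx model with a single input works the same way.

    import numpy as np
    import onnxruntime as ort

    # Load an exported ONNX model (placeholder path) on the default CPU provider.
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

    # Build a dummy tensor matching the input shape assumed here.
    input_name = session.get_inputs()[0].name
    dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)

    # run(None, ...) returns every model output as a NumPy array.
    outputs = session.run(None, {input_name: dummy})
    print(outputs[0].shape)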
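kokoro-onnx (entry 3): a sketch of producing an audio.wav file from Python. The Kokoro class, its create() call, the voice name, and the model/voices file names follow the project's quick-start as best understood here and should be treated as assumptions; check the README for the exact current API.

    import soundfile as sf
    from kokoro_onnx import Kokoro  # assumed import path

    # Model and voices files are the prebuilt artifacts the project ships (placeholder names).
    kokoro = Kokoro("kokoro-v0_19.onnx", "voices.bin")

    # create() is assumed to return raw samples plus the sample rate.
    samples, sample_rate = kokoro.create("Hello from kokoro-onnx.", voice="af", speed=1.0, lang="en-us")

    sf.write("audio.wav", samples, sample_rate)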
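LiteRT (entry 4): because LiteRT is the renamed TensorFlow Lite, the familiar Interpreter API still applies. This sketch uses the classic tf.lite.Interpreter entry point; the standalone LiteRT package exposes the same Interpreter interface (exact import path not shown here). The .tflite file is a placeholder.

    import numpy as np
    import tensorflow as tf

    # Load a converted flatbuffer model (placeholder path).
    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed a dummy tensor with the shape the model declares for its input.
    dummy = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
    interpreter.set_tensor(input_details[0]["index"], dummy)
    interpreter.invoke()

    print(interpreter.get_tensor(output_details[0]["index"]).shape)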
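Torch-TensorRT (entry 6): the explicit ahead-of-time compile step described in the entry, sketched with torch_tensorrt.compile(). The ResNet-18 model and input shape are purely illustrative; a CUDA-capable GPU and a TensorRT installation are assumed.

    import torch
    import torch_tensorrt
    import torchvision

    model = torchvision.models.resnet18(weights=None).eval().cuda()

    # AOT compile step: a standard PyTorch module goes in, a module backed by a
    # TensorRT engine comes out.
    trt_model = torch_tensorrt.compile(
        model,
        inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
        enabled_precisions={torch.half},
    )

    out = trt_model(torch.randn(1, 3, 224, 224, device="cuda"))
    print(out.shape)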
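mlx (entry 7): a small sketch of the array API, showing lazy evaluation and the grad transformation on Apple silicon. The toy least-squares loss is only for illustration.

    import mlx.core as mx

    def loss(w, x, y):
        # Mean squared error of a linear model; nothing is computed until evaluated.
        return mx.mean((x @ w - y) ** 2)

    x = mx.random.normal((64, 8))
    y = mx.random.normal((64,))
    w = mx.zeros((8,))

    # mx.grad returns a function that computes the gradient w.r.t. the first argument.
    g = mx.grad(loss)(w, x, y)
    mx.eval(g)  # force the lazy computation
    print(g.shape)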
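OpenVINO (entry 10): a sketch of loading an IR model and running it on CPU through the OpenVINO Runtime Python API. File names and the input shape are placeholders.

    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")           # IR produced by the Model Optimizer
    compiled = core.compile_model(model, "CPU")    # choose a device plugin: CPU, GPU, ...

    # Calling the compiled model runs a synchronous inference request.
    dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
    results = compiled([dummy])
    print(results[compiled.output(0)].shape)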
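SGLang (entry 19): the server exposes an OpenAI-compatible endpoint, so once a model is being served the standard openai client can talk to it. The launch command, port, and model name below are illustrative assumptions.

    # Assumes a local server was started first, for example:
    #   python -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --port 30000
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
    resp = client.chat.completions.create(
        model="default",
        messages=[{"role": "user", "content": "Summarize what a serving framework does."}],
    )
    print(resp.choices[0].message.content)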
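E2B (entry 24): a sketch of running model-generated code inside a cloud sandbox with the Code Interpreter SDK. The e2b_code_interpreter package name, Sandbox class, and run_code() call are assumptions based on the SDK's quick-start; an E2B API key is expected in the environment.

    from e2b_code_interpreter import Sandbox  # assumed import path

    # Each Sandbox is an isolated cloud environment; the code runs there, not locally.
    with Sandbox() as sandbox:
        execution = sandbox.run_code("sum(i * i for i in range(10))")
        print(execution.text)  # assumed: the textual result of the last expression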