Showing 31 open source projects for "runtime"

  • 1
    ONNX Runtime

    ONNX Runtime: cross-platform, high-performance ML inferencing

    ONNX Runtime is a cross-platform inference and training machine-learning accelerator. ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc.
    Downloads: 49 This Week
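
    As a quick illustration of the inference API, the following minimal Python sketch loads an ONNX model and runs it on the CPU; the model path and input shape are placeholders:

        import numpy as np
        import onnxruntime as ort

        # Load a model (hypothetical path) and run inference on the CPU.
        session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
        input_name = session.get_inputs()[0].name
        x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # shape assumed for illustration
        outputs = session.run(None, {input_name: x})
        print(outputs[0].shape)
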
  • 2
    IREE

    A retargetable MLIR-based machine learning compiler runtime toolkit

    IREE (Intermediate Representation Execution Environment, pronounced as "eerie") is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the data center and down to satisfy the constraints and special considerations of mobile and edge deployments.
    Downloads: 14 This Week
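
    As a minimal sketch of the compile step, the snippet below uses IREE's Python compiler API to lower a trivial MLIR function into a deployable VM FlatBuffer for the CPU backend; the MLIR source and output path are illustrative:

        import iree.compiler as ireec

        # A trivial MLIR module: elementwise multiply of two 4-element tensors.
        MLIR = """
        func.func @simple_mul(%a: tensor<4xf32>, %b: tensor<4xf32>) -> tensor<4xf32> {
          %0 = arith.mulf %a, %b : tensor<4xf32>
          return %0 : tensor<4xf32>
        }
        """

        # Lower through IREE's unified IR and emit a .vmfb module for llvm-cpu.
        vmfb = ireec.compile_str(MLIR, target_backends=["llvm-cpu"])
        with open("simple_mul.vmfb", "wb") as f:
            f.write(vmfb)
        # The module can then be run with iree-run-module or the iree.runtime API.
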
  • 3
    Torch-TensorRT

    PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT

    Torch-TensorRT is a compiler for PyTorch/TorchScript, targeting NVIDIA GPUs via NVIDIA’s TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch’s Just-In-Time (JIT) compiler, Torch-TensorRT is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript program into a module targeting a TensorRT engine. Torch-TensorRT operates as a PyTorch extension and compiles modules that integrate into the JIT runtime seamlessly. ...
    Downloads: 11 This Week
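
    A minimal sketch of the explicit AOT compile step, assuming a CUDA-capable GPU and the torchvision package for a sample model:

        import torch
        import torch_tensorrt
        import torchvision.models as models

        model = models.resnet18(weights=None).eval().cuda()
        example = torch.randn(1, 3, 224, 224, device="cuda")

        # Ahead-of-time compilation into a TensorRT-backed module.
        trt_model = torch_tensorrt.compile(
            model,
            inputs=[torch_tensorrt.Input(example.shape)],
            enabled_precisions={torch.float},
        )
        print(trt_model(example).shape)  # drop-in replacement for the original module
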
  • 4
    mlx

    MLX: An array framework for Apple silicon

    MLX is an array framework for machine learning research on Apple silicon. It offers a familiar NumPy-like Python API (with C++ and Swift APIs as well), composable function transformations for automatic differentiation and vectorization, lazy computation, and a unified memory model that lets operations run on the CPU or GPU without copying arrays between devices.
    Downloads: 2 This Week
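
    A minimal sketch of the array API and its composable transforms; with lazy evaluation, results materialize only when used:

        import mlx.core as mx

        # Arrays live in unified memory; operations are recorded lazily.
        a = mx.array([1.0, 2.0, 3.0])
        b = mx.square(a) + 1.0

        # grad() transforms a function into one that computes its gradient.
        f = lambda x: (x ** 2).sum()
        df = mx.grad(f)
        print(b, df(a))  # printing forces evaluation
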
  • 5
    OpenVINO

    OpenVINO™ Toolkit repository

    ...Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud. This open-source version includes several components, namely the Model Optimizer, OpenVINO™ Runtime, and Post-Training Optimization Tool, as well as CPU, GPU, MYRIAD, multi-device, and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open-source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi.
    Downloads: 24 This Week
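
    A minimal sketch of the OpenVINO™ Runtime Python API, assuming a model already converted to IR (the file paths and input shape are placeholders):

        import numpy as np
        from openvino.runtime import Core

        core = Core()
        model = core.read_model("model.xml")         # hypothetical IR file
        compiled = core.compile_model(model, "CPU")  # or "GPU", "AUTO", ...

        x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # shape assumed
        result = compiled([x])[compiled.output(0)]
        print(result.shape)
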
  • 6
    Spice.ai OSS

    A self-hostable CDN for databases

    Spice is a portable runtime offering developers a unified SQL interface to materialize, accelerate, and query data from any database, data warehouse, or data lake. Spice connects, fuses, and delivers data to applications, machine-learning models, and AI backends, functioning as an application-specific, tier-optimized Database CDN. The Spice runtime, written in Rust, is built with industry-leading technologies such as Apache DataFusion, Apache Arrow, Apache Arrow Flight, SQLite, and DuckDB. ...
    Downloads: 1 This Week
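
    As an illustrative sketch, a running Spice instance can be queried over its HTTP SQL endpoint; the /v1/sql path and port 8090 are assumptions based on the default configuration:

        import requests

        # Send a SQL query to a locally running Spice runtime (assumed defaults).
        resp = requests.post(
            "http://localhost:8090/v1/sql",
            data="SELECT 1 AS one",
            headers={"Content-Type": "text/plain"},
        )
        print(resp.json())
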
  • 7
    Elyra

    Elyra extends JupyterLab with an AI-centric approach

    Elyra is a set of AI-centric extensions to JupyterLab Notebooks. The Elyra Getting Started Guide includes more details on these features. A version-specific summary of new features is located on the releases page.
    Downloads: 7 This Week
  • 8
    ExecuTorch

    On-device AI across mobile, embedded and edge for PyTorch

    ExecuTorch is an end-to-end solution for enabling on-device inference capabilities across mobile and edge devices including wearables, embedded devices and microcontrollers. It is part of the PyTorch Edge ecosystem and enables efficient deployment of PyTorch models to edge devices.
    Downloads: 7 This Week
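
    A minimal sketch of the export path from a PyTorch module to an ExecuTorch .pte program; the module and file name are illustrative:

        import torch
        from executorch.exir import to_edge

        class MulAdd(torch.nn.Module):
            def forward(self, x, y):
                return x * y + y

        example_args = (torch.randn(4), torch.randn(4))
        exported = torch.export.export(MulAdd(), example_args)

        # Lower to the Edge dialect, then to an ExecuTorch program.
        et_program = to_edge(exported).to_executorch()
        with open("mul_add.pte", "wb") as f:
            f.write(et_program.buffer)
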
  • 9
    NVIDIA FLARE

    NVIDIA Federated Learning Application Runtime Environment

    NVIDIA FLARE (Federated Learning Application Runtime Environment) is a domain-agnostic, open-source, extensible SDK that allows researchers and data scientists to adapt existing ML/DL workflows (PyTorch, TensorFlow, scikit-learn, XGBoost, etc.) to a federated paradigm. It enables platform developers to build a secure, privacy-preserving offering for distributed multi-party collaboration.
    Downloads: 1 This Week
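
    A minimal sketch of adapting a local training loop with the FLARE Client API; the metric value is a placeholder:

        import nvflare.client as flare

        flare.init()                   # connect to the FLARE client runtime
        input_model = flare.receive()  # FLModel carrying the current global weights
        params = input_model.params

        # ... run local training on `params` here ...

        output_model = flare.FLModel(params=params, metrics={"accuracy": 0.0})
        flare.send(output_model)       # return the update to the server
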
  • 10
    TensorRT

    C++ library for high-performance inference on NVIDIA GPUs

    NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications. TensorRT-based applications perform up to 40X faster than CPU-only platforms during inference. With TensorRT, you can optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyperscale data centers, embedded, or automotive product platforms. ...
    Downloads: 17 This Week
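
    A minimal sketch of building a serialized engine from an ONNX model with the TensorRT Python API (TensorRT 8.x-style; file paths are placeholders):

        import tensorrt as trt

        logger = trt.Logger(trt.Logger.WARNING)
        builder = trt.Builder(logger)
        network = builder.create_network(
            1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
        )
        parser = trt.OnnxParser(network, logger)

        with open("model.onnx", "rb") as f:  # hypothetical ONNX model
            if not parser.parse(f.read()):
                raise RuntimeError(parser.get_error(0))

        # Build and persist an optimized engine for the current GPU.
        config = builder.create_builder_config()
        engine = builder.build_serialized_network(network, config)
        with open("model.engine", "wb") as f:
            f.write(engine)
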
  • 11
    AWS Neuron

    Powering Amazon custom machine learning chips

    ...Neuron is pre-integrated into popular machine learning frameworks like TensorFlow, MXNet, and PyTorch to provide a seamless training-to-inference workflow. It includes a compiler, a runtime, and a driver, as well as debug and profiling utilities with a TensorBoard plugin for visualization.
    Downloads: 0 This Week
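
    A minimal sketch of ahead-of-time compilation with the PyTorch Neuron (torch-neuronx) flow, assuming torchvision for a sample model; torch_neuron is the analogous package for first-generation Inferentia:

        import torch
        import torch_neuronx
        import torchvision.models as models

        model = models.resnet18(weights=None).eval()
        example = torch.randn(1, 3, 224, 224)

        # Compile for a Neuron device and save the traced module.
        neuron_model = torch_neuronx.trace(model, example)
        torch.jit.save(neuron_model, "resnet18_neuron.pt")
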
  • 12
    Porcupine

    On-device wake word detection powered by deep learning

    ...Chrome, Safari, Firefox, and Edge. Linux (x86_64), macOS (x86_64, arm64), and Windows (x86_64). Scalable. It can detect multiple always-listening voice commands with no added runtime footprint. Self-service. Developers can train custom wake word models using Picovoice Console. Porcupine is the right product if you need to detect one or a few static (always-listening) voice commands. If you want to create voice experiences similar to Alexa or Google, see the Picovoice platform.
    Downloads: 8 This Week
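
    A minimal sketch of the Python binding; the AccessKey placeholder must come from Picovoice Console, and audio must be 16 kHz, 16-bit mono PCM fed in frames of porcupine.frame_length samples:

        import pvporcupine

        porcupine = pvporcupine.create(
            access_key="YOUR_ACCESS_KEY",         # placeholder
            keywords=["picovoice", "bumblebee"],  # built-in keywords
        )

        def on_audio_frame(pcm):
            # pcm: a frame of porcupine.frame_length 16-bit samples
            keyword_index = porcupine.process(pcm)
            if keyword_index >= 0:
                print("detected keyword", keyword_index)
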
  • 13
    OneFlow

    OneFlow is a deep learning framework designed to be user-friendly

    OneFlow is a deep learning framework designed to be user-friendly, scalable, and efficient. It includes an extension for targeting third-party compilers such as XLA, TensorRT, and OpenVINO. The CUDA runtime is statically linked into OneFlow, so OneFlow works on the minimum supported driver and any newer driver. Distributed performance (efficiency) is the core technical difficulty of a deep learning framework; OneFlow focuses on performance improvement and heterogeneous distributed expansion. It adheres to the core concepts and architecture of static compilation and streaming parallelism, and solves the memory-wall challenge at the cluster level. ...
    Downloads: 3 This Week
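
    Since OneFlow closely mirrors the PyTorch tensor and module API, a minimal sketch looks familiar:

        import oneflow as flow

        x = flow.randn(2, 3)
        linear = flow.nn.Linear(3, 4)
        y = linear(x)
        print(y.shape)  # oneflow.Size([2, 4])
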
  • 14
    TFX

    TFX is an end-to-end platform for deploying production ML pipelines

    ...Both the components themselves and the integrations with orchestration systems can be extended. TFX components interact with an ML Metadata backend that keeps a record of component runs, input and output artifacts, and runtime configuration. This metadata backend enables advanced functionality like experiment tracking or warm starting/resuming ML models from previous runs.
    Downloads: 0 This Week
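
    A minimal sketch of a one-component pipeline run locally; the data and root paths are placeholders:

        from tfx import v1 as tfx

        example_gen = tfx.components.CsvExampleGen(input_base="data/")

        pipeline = tfx.dsl.Pipeline(
            pipeline_name="demo",
            pipeline_root="pipeline_root/",
            metadata_connection_config=(
                tfx.orchestration.metadata.sqlite_metadata_connection_config("metadata.db")
            ),
            components=[example_gen],  # runs and artifacts are recorded in ML Metadata
        )

        tfx.orchestration.LocalDagRunner().run(pipeline)
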
  • 15
    omegaml

    MLOps simplified. From ML Pipeline ⇨ Data Product without the hassle

    omega|ml is the innovative Python-native MLOps platform that provides a scalable development and runtime environment for your Data Products. Works from laptop to cloud.
    Downloads: 0 This Week
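
    A minimal sketch of the store-then-run workflow; the dataset and model names are illustrative:

        import pandas as pd
        from sklearn.linear_model import LinearRegression
        import omegaml as om

        # Store a dataset and an untrained model in the omega|ml stores.
        df = pd.DataFrame({"x": range(10), "y": range(10)})
        om.datasets.put(df, "sample")
        om.models.put(LinearRegression(), "regmodel")

        # Fit and predict on the runtime (local or cloud), not in the client process.
        om.runtime.model("regmodel").fit("sample[x]", "sample[y]")
        pred = om.runtime.model("regmodel").predict("sample[x]").get()
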
  • 16
    tf2onnx

    Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX

    tf2onnx converts TensorFlow (tf-1.x or tf-2.x), Keras, TensorFlow.js, and TFLite models to ONNX via the command line or the Python API. Note: TensorFlow.js support was just added; while we have tested it with many tfjs models from tfhub, it should be considered experimental. TensorFlow has many more ops than ONNX, and occasionally mapping a model to ONNX creates issues. tf2onnx will use the ONNX version installed on your system, and installs the latest ONNX version if none is found. We support and test ONNX...
    Downloads: 2 This Week
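
    A minimal sketch of the Python API for a small Keras model; the CLI equivalent is python -m tf2onnx.convert --saved-model <dir> --output model.onnx:

        import tensorflow as tf
        import tf2onnx

        model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
        spec = (tf.TensorSpec((None, 4), tf.float32, name="input"),)

        # Convert the in-memory Keras model and write model.onnx to disk.
        model_proto, _ = tf2onnx.convert.from_keras(
            model, input_signature=spec, output_path="model.onnx"
        )
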
  • 17
    Seldon Core

    An MLOps framework to package, deploy, monitor and manage models

    The de facto standard open-source platform for rapidly deploying machine learning models on Kubernetes. Seldon Core, our open-source framework, makes it easier and faster to deploy your machine learning models and experiments at scale on Kubernetes. Seldon Core serves models built in any open-source or commercial model building framework. You can make use of powerful Kubernetes features like custom resource definitions to manage model graphs. And then connect your continuous integration and...
    Downloads: 2 This Week
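
    A minimal sketch of calling a deployed model with the seldon-core Python client; the deployment name, namespace, and gateway endpoint are placeholders for your cluster:

        import numpy as np
        from seldon_core.seldon_client import SeldonClient

        sc = SeldonClient(
            deployment_name="mymodel",
            namespace="seldon",
            gateway="istio",
            gateway_endpoint="localhost:8003",  # assumed port-forwarded ingress
        )
        response = sc.predict(data=np.array([[1.0, 2.0]]), transport="rest")
        print(response.success)
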
  • 18
    SageMaker Training Toolkit

    Train machine learning models within Docker containers

    ...To train a model, you can include your training script and dependencies in a Docker container that runs your training code. A container provides an effectively isolated environment, ensuring a consistent runtime and reliable training process. The SageMaker Training Toolkit can be easily added to any Docker container, making it compatible with SageMaker for training models. If you use a prebuilt SageMaker Docker image for training, this library may already be included. Write a training script (e.g., train.py), then define a container with a Dockerfile that includes the training script and any dependencies.
    Downloads: 2 This Week
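
    A minimal train.py sketch following the toolkit's conventions: hyperparameters arrive as command-line arguments, and the SM_* environment variables point at the input data and model output locations inside the container:

        import argparse
        import os

        def parse_args():
            parser = argparse.ArgumentParser()
            parser.add_argument("--epochs", type=int, default=1)
            parser.add_argument("--model-dir",
                                default=os.environ.get("SM_MODEL_DIR", "/opt/ml/model"))
            parser.add_argument("--train",
                                default=os.environ.get("SM_CHANNEL_TRAINING",
                                                       "/opt/ml/input/data/training"))
            return parser.parse_args()

        if __name__ == "__main__":
            args = parse_args()
            # ... load data from args.train, train for args.epochs,
            #     and save artifacts to args.model_dir ...
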
  • 19
    oneDNN

    oneAPI Deep Neural Network Library (oneDNN)

    This software was previously known as Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) and Deep Neural Network Library (DNNL). oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform performance library of basic building blocks for deep learning applications. oneDNN is part of oneAPI. The library is optimized for Intel(R) Architecture Processors, Intel Processor Graphics and Xe Architecture graphics. oneDNN has experimental support for the...
    Downloads: 0 This Week
  • 20
    MMDeploy

    OpenMMLab Model Deployment Framework

    ...All kinds of modules in the SDK can be extended, such as Transform for image processing, Net for Neural Network inference, Module for postprocessing and so on. Install and build your target backend. ONNX Runtime is a cross-platform inference and training accelerator compatible with many popular ML/DNN frameworks. Please read getting_started for the basic usage of MMDeploy.
    Downloads: 2 This Week
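
    A minimal sketch of running a converted model with the mmdeploy_runtime Python API; the model directory and image path are placeholders:

        import cv2
        from mmdeploy_runtime import Detector

        img = cv2.imread("demo.jpg")
        detector = Detector(model_path="mmdeploy_models/faster-rcnn",
                            device_name="cpu", device_id=0)
        bboxes, labels, masks = detector(img)
        print(bboxes.shape, labels.shape)
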
  • 21
    CLIP-as-service

    Embed images and sentences into fixed-length vectors

    CLIP-as-service is a low-latency, high-scalability service for embedding images and text. It can be easily integrated as a microservice into neural search solutions. Serve CLIP models with TensorRT, ONNX Runtime, and PyTorch w/o JIT at 800 QPS[*]. Non-blocking duplex streaming on requests and responses, designed for large data and long-running tasks. Horizontally scale multiple CLIP models up and down on a single GPU, with automatic load balancing. Easy to use: no learning curve, with a minimalist design on client and server. Intuitive and consistent API for image and sentence embedding. ...
    Downloads: 0 This Week
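
    A minimal client sketch, assuming a clip-server is already running locally on the default gRPC port 51000:

        from clip_client import Client

        c = Client("grpc://0.0.0.0:51000")

        # Text and image inputs (the local path is a placeholder) embed into
        # fixed-length vectors in the same space.
        vectors = c.encode(["a photo of a cat", "apple.png"])
        print(vectors.shape)  # one vector per input
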
  • 22
    SageMaker Inference Toolkit

    Serve machine learning models within a Docker container

    ...Once you have a trained model, you can include it in a Docker container that runs your inference code. A container provides an effectively isolated environment, ensuring a consistent runtime regardless of where the container is deployed. Containerizing your model and code enables fast and reliable deployment of your model. The SageMaker Inference Toolkit implements a model serving stack and can be easily added to any Docker container, making it deployable to SageMaker. This library's serving stack is built on Multi Model Server, and it can serve your own models or those you trained on SageMaker using machine learning frameworks with native SageMaker support.
    Downloads: 0 This Week
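
    A minimal sketch of the toolkit's extension points: subclass DefaultInferenceHandler and override the model/input/predict/output functions (load_model here is a hypothetical loader):

        from sagemaker_inference import (decoder, default_inference_handler,
                                         encoder)

        class MyHandler(default_inference_handler.DefaultInferenceHandler):
            def default_model_fn(self, model_dir):
                return load_model(model_dir)  # hypothetical: load your model here

            def default_input_fn(self, input_data, content_type):
                return decoder.decode(input_data, content_type)

            def default_predict_fn(self, data, model):
                return model(data)

            def default_output_fn(self, prediction, accept):
                return encoder.encode(prediction, accept)
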
  • 23
    KotlinDL

    High-level Deep Learning Framework written in Kotlin

    KotlinDL is a high-level Deep Learning API written in Kotlin and inspired by Keras. Under the hood, it uses TensorFlow Java API and ONNX Runtime API for Java. KotlinDL offers simple APIs for training deep learning models from scratch, importing existing Keras and ONNX models for inference, and leveraging transfer learning for tailoring existing pre-trained models to your tasks. This project aims to make Deep Learning easier for JVM and Android developers and simplify deploying deep learning models in production environments.
    Downloads: 2 This Week
  • 24
    Hugging Face Transformer

    CPU/GPU inference server for Hugging Face transformer models

    ...Most tutorials on Transformer deployment in production are built on PyTorch and FastAPI. Both are great tools, but not very performant at inference. Then, if you spend some time, you can build something over ONNX Runtime and Triton Inference Server. You will usually get 2X to 4X faster inference compared to vanilla PyTorch. It's cool! However, if you want best-in-class performance on GPU, there is only a single possible combination: NVIDIA TensorRT and Triton. You will usually get 5X faster inference compared to vanilla PyTorch.
    Downloads: 1 This Week
  • 25
    MACE

    Deep learning inference framework optimized for mobile platforms

    Mobile AI Compute Engine (or MACE for short) is a deep learning inference framework optimized for mobile heterogeneous computing on Android, iOS, Linux, and Windows devices. The runtime is optimized with NEON, OpenCL, and Hexagon, and the Winograd algorithm is used to speed up convolution operations. Initialization is also optimized to be faster. Chip-dependent power options like big.LITTLE scheduling and Adreno GPU hints are included as advanced APIs. UI responsiveness guarantees are sometimes obligatory when running a model. ...
    Downloads: 0 This Week
Page 1 of 2