Run local LLMs on any device. Open source.
ONNX Runtime: cross-platform, high-performance ML inferencing (see the inference sketch after this list)
A high-performance neural network inference framework for mobile platforms
Protect and discover secrets using Gitleaks
An MLOps framework to package, deploy, monitor and manage models
Official inference library for Mistral models
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
Unified Model Serving Framework
Neural Network Compression Framework for enhanced OpenVINO inference
Standardized Serverless ML Inference Platform on Kubernetes
LLM.swift is a simple and readable library for interacting with large language models locally on Apple platforms
A set of comprehensive computer vision & machine intelligence libraries
A general-purpose probabilistic programming system
Library for serving Transformers models on Amazon SageMaker
Superduper: Integrate AI models and machine learning workflows with your database
Easy-to-use deep learning framework with 3 key features
A unified framework for scalable computing (see the task-API sketch after this list)
A set of Docker images for training and serving models in TensorFlow
Powering Amazon's custom machine learning chips
A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing
Build production-ready agentic workflows with natural language
A deep learning optimization library that makes distributed training easy, efficient, and effective (see the training sketch after this list)
OpenMMLab Model Deployment Framework
A framework dedicated to making neural data processing pipelines simple and fast
A real-time inference engine for temporal logic specifications
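
For the ONNX Runtime entry above, a minimal Python inference sketch. The model path "model.onnx" and the image-classifier input shape are placeholders, not taken from the list.

```python
import numpy as np
import onnxruntime as ort

# Load an ONNX model; "model.onnx" is a placeholder path.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Query the model's declared input name rather than hard-coding it.
input_name = session.get_inputs()[0].name

# Dummy input; this shape assumes a typical 224x224 image classifier.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference; passing None as the output list returns all model outputs.
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```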
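The "unified framework for scalable computing" entry reads like Ray's tagline; assuming that identification is right, here is a minimal sketch of its remote-task API.

```python
import ray

# Start a local Ray runtime (connects to a cluster if one is configured).
ray.init()

# @ray.remote turns a plain function into a task that can run on any worker.
@ray.remote
def square(x):
    return x * x

# Launch four tasks in parallel; .remote() returns futures immediately.
futures = [square.remote(i) for i in range(4)]

# ray.get blocks until all results are ready.
print(ray.get(futures))  # [0, 1, 4, 9]
```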
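The distributed-training entry matches DeepSpeed's description; assuming it is DeepSpeed, this sketch wraps a toy PyTorch model. The model and config values are illustrative, not recommendations.

```python
import torch
import deepspeed

# Toy model; any torch.nn.Module works here.
model = torch.nn.Linear(10, 2)

# Minimal DeepSpeed config; batch size and learning rate are illustrative.
ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
}

# deepspeed.initialize wraps the model and optimizer for distributed training.
# Run the script with the `deepspeed` launcher so distributed state is set up.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

# A training step uses the engine in place of the raw model and optimizer.
inputs = torch.randn(8, 10).to(engine.device)
targets = torch.randn(8, 2).to(engine.device)
loss = torch.nn.functional.mse_loss(engine(inputs), targets)
engine.backward(loss)
engine.step()
```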