Superduper: Integrate AI models and machine learning workflows
Build production-ready agentic workflows with natural language
C++ library for high-performance inference on NVIDIA GPUs
Protect and discover secrets using Gitleaks
Neural Network Compression Framework for enhanced OpenVINO inference
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
Official inference library for Mistral models
Powering Amazon custom machine learning chips
ONNX Runtime: cross-platform, high performance ML inferencing
Standardized Serverless ML Inference Platform on Kubernetes
A general-purpose probabilistic programming system
A unified framework for scalable computing
A set of Docker images for training and serving models in TensorFlow
Deep Learning API and Server in C++14, with support for Caffe and PyTorch
An unofficial Python package that returns responses from Google Bard
Guide to deploying deep-learning inference networks
Toolkit for inference and serving with MXNet in SageMaker
Deep learning inference framework optimized for mobile platforms
Uniform deep learning inference framework for mobile