Open standard for machine learning interoperability
Everything you need to build state-of-the-art foundation models
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Build production-ready agentic workflows with natural language
A GPU-accelerated library containing highly optimized building blocks
Framework that allows you to transform your vector database
The AI-native (edge and LLM) proxy for agents
Run serverless GPU workloads with fast cold starts on bare-metal
C++ library for high-performance inference on NVIDIA GPUs
Uncover insights, surface problems, monitor, and fine-tune your LLM
Probabilistic reasoning and statistical analysis in TensorFlow
A scalable inference server for models optimized with OpenVINO
Set of comprehensive computer vision & machine intelligence libraries
Adversarial Robustness Toolbox (ART) - Python Library for ML security
Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods
Deep learning API and server in C++14 with support for Caffe and PyTorch
Trainable models and NN optimization tools
An MLOps framework to package, deploy, monitor, and manage models
Serving system for machine learning models
Powering Amazon's custom machine learning chips
A library to communicate with ChatGPT, Claude, Copilot, Gemini
A toolkit for Keras & TensorFlow to optimize ML models for deployment
Framework dedicated to making neural data processing
LLMFlows - Simple, Explicit and Transparent LLM Apps