Uplift modeling and causal inference with machine learning algorithms
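This reads like the tagline of an uplift/causal-inference library (e.g. CausalML). Rather than assume that library's API, here is a minimal T-learner sketch built only on scikit-learn; all data and variable names are synthetic and illustrative.

```python
# Minimal T-learner sketch for uplift estimation, using only scikit-learn.
# All names here (X, treatment, y) are illustrative, not from any library.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))           # covariates
treatment = rng.integers(0, 2, 1000)     # 1 = treated, 0 = control
y = (rng.random(1000) < 0.3 + 0.1 * treatment).astype(int)  # synthetic outcome

# Fit one outcome model per arm, then take the difference in predicted
# probabilities as the per-individual uplift estimate.
model_t = GradientBoostingClassifier().fit(X[treatment == 1], y[treatment == 1])
model_c = GradientBoostingClassifier().fit(X[treatment == 0], y[treatment == 0])
uplift = model_t.predict_proba(X)[:, 1] - model_c.predict_proba(X)[:, 1]
print("mean estimated uplift:", uplift.mean())
```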
Uncover insights, surface problems, monitor, and fine-tune your LLM
Standardized Serverless ML Inference Platform on Kubernetes
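This matches KServe's tagline. Once an inference service is deployed, prediction requests typically go through the v1 REST protocol (`POST /v1/models/<name>:predict`). A hedged client sketch, where the host and model name are placeholders:

```python
# Hedged client sketch for a KServe-style v1 REST prediction endpoint.
# The host and model name are placeholders; the v1 protocol itself
# (POST /v1/models/<name>:predict with an "instances" payload) is standard.
import requests

HOST = "http://sklearn-iris.default.example.com"  # placeholder ingress host
MODEL = "sklearn-iris"                             # placeholder model name

payload = {"instances": [[6.8, 2.8, 4.8, 1.4]]}
resp = requests.post(f"{HOST}/v1/models/{MODEL}:predict", json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())  # e.g. {"predictions": [1]}
```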
Unified Model Serving Framework
A high-performance ML model serving framework that offers dynamic batching
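To make "dynamic batching" concrete, here is a framework-agnostic sketch of the core loop: drain a request queue until a maximum batch size or a short deadline is hit, then run one batched model call. All names (`Request`, `predict_batch`) are illustrative, not any framework's API.

```python
# Framework-agnostic sketch of dynamic batching: drain a request queue up to
# max_batch_size or until max_wait_ms elapses, then run one batched call.
import queue
import time
from concurrent.futures import Future
from dataclasses import dataclass, field

@dataclass
class Request:
    payload: list
    future: Future = field(default_factory=Future)

requests_q: "queue.Queue[Request]" = queue.Queue()

def predict_batch(payloads):  # stand-in for a real batched model call
    return [sum(p) for p in payloads]

def batching_loop(max_batch_size=8, max_wait_ms=5):
    while True:
        batch = [requests_q.get()]  # block until the first request arrives
        deadline = time.monotonic() + max_wait_ms / 1000
        while len(batch) < max_batch_size and time.monotonic() < deadline:
            try:
                batch.append(requests_q.get(timeout=max(0.0, deadline - time.monotonic())))
            except queue.Empty:
                break
        # One forward pass for the whole batch, then fan results back out.
        for req, out in zip(batch, predict_batch([r.payload for r in batch])):
            req.future.set_result(out)
```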
A toolkit to optimize Keras & TensorFlow ML models for deployment
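This matches the TensorFlow Model Optimization Toolkit's tagline. A common deployment optimization in that space is post-training quantization, shown here with stock TensorFlow's TFLite converter; the tiny Keras model is a placeholder.

```python
# Post-training quantization sketch using stock TensorFlow's TFLite converter.
# The tiny Keras model is a placeholder; Optimize.DEFAULT enables weight
# quantization, which typically shrinks the model roughly 4x for deployment.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_bytes)
```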
An MLOps framework to package, deploy, monitor, and manage models
Open-source tool designed to enhance the efficiency of workloads
OpenMMLab Model Deployment Framework
Adversarial Robustness Toolbox (ART) - Python Library for ML security
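ART's getting-started pattern is to wrap a fitted model in an ART estimator and then run an attack against it. A hedged sketch, assuming ART's scikit-learn wrapper exposes loss gradients for logistic regression so a gradient-based evasion attack applies:

```python
# Sketch of ART's typical flow: wrap a fitted model in an ART estimator,
# then craft adversarial examples with a gradient-based evasion attack.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

classifier = SklearnClassifier(model=model)
attack = FastGradientMethod(estimator=classifier, eps=0.5)
X_adv = attack.generate(x=X.astype(np.float32))

print("clean accuracy:", model.score(X, y))
print("adversarial accuracy:", model.score(X_adv, y))
```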
Superduper: Integrate AI models and machine learning workflows with your database
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
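This matches EconML's tagline. A hedged sketch assuming EconML's `LinearDML` double-machine-learning estimator; the synthetic data and the true effect function are purely illustrative.

```python
# Hedged sketch assuming EconML's double machine learning estimator
# (econml.dml.LinearDML); the synthetic data is illustrative.
import numpy as np
from econml.dml import LinearDML

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))              # effect modifiers
W = rng.normal(size=(n, 2))              # confounders
T = (rng.random(n) < 0.5).astype(float)  # binary treatment
# True effect grows with X[:, 0]; outcome adds confounding and noise.
Y = (1.0 + X[:, 0]) * T + W[:, 0] + rng.normal(scale=0.5, size=n)

est = LinearDML(discrete_treatment=True)
est.fit(Y, T, X=X, W=W)
print(est.effect(X[:5]))  # per-unit conditional treatment effect estimates
```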
Probabilistic reasoning and statistical analysis in TensorFlow
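This is TensorFlow Probability's tagline. Its basic building block is the distribution object; a minimal sketch:

```python
# Minimal TensorFlow Probability sketch: build a distribution, sample from it,
# and evaluate log-probabilities.
import tensorflow_probability as tfp

tfd = tfp.distributions
dist = tfd.Normal(loc=0.0, scale=1.0)

samples = dist.sample(5, seed=42)
print(samples)
print(dist.log_prob(0.0))  # log density at zero: -0.5*log(2*pi) ~ -0.9189
```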
Integrate, train, and manage any AI models and APIs with your database
Powering Amazon's custom machine learning chips
Serve machine learning models within a Docker container
The Triton Inference Server provides an optimized cloud and edge inferencing solution
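A hedged client-side sketch using the `tritonclient` HTTP package; the model name, tensor names, and shape are placeholders that must match the deployed model's configuration.

```python
# Hedged client sketch for Triton's HTTP/REST API via the tritonclient
# package; model name, input/output names, and shape are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

data = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("input__0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

result = client.infer(model_name="resnet50", inputs=[inp])
print(result.as_numpy("output__0").shape)
```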
Toolbox of models, callbacks, and datasets for AI/ML researchers
Deploy an ML inference service on a budget in 10 lines of code
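To illustrate the "few lines of code" idea generically (this is not any specific tool's API), a minimal FastAPI inference service:

```python
# Generic illustration of a tiny inference service: a FastAPI app that
# loads a model once at startup and serves predictions over HTTP.
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = FastAPI()
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)  # placeholder model

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(f: Features):
    return {"prediction": int(model.predict([f.values])[0])}
# Run with: uvicorn app:app --port 8080
```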