DoWhy is a Python library for causal inference
Easy-to-use speech toolkit including self-supervised learning models
Uplift modeling and causal inference with machine learning algorithms
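Uplift modeling estimates how much a treatment (e.g. a marketing action) changes an individual's outcome. A minimal sketch of the common "two-model" (T-learner) idea, using plain means as stand-ins for the fitted per-arm models; this is a conceptual illustration, not any particular library's API:

```python
# Sketch of the two-model (T-learner) approach to uplift modeling:
# fit one outcome model per treatment arm, then score uplift as the
# difference in predicted outcomes. Means stand in for real models.

def fit_mean(outcomes):
    """Trivial 'model': predict the mean outcome of the training rows."""
    return sum(outcomes) / len(outcomes)

def uplift_t_learner(treated_outcomes, control_outcomes):
    """Estimate uplift as E[Y | treated] - E[Y | control]."""
    return fit_mean(treated_outcomes) - fit_mean(control_outcomes)

# 4 treated users (3 converted), 4 control users (1 converted)
print(uplift_t_learner([1, 1, 0, 1], [0, 1, 0, 0]))  # 0.75 - 0.25 = 0.5
```

Real implementations fit separate ML regressors or classifiers per arm and score uplift per individual from their features.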
Operating LLMs in production
Trainable models and NN optimization tools
Unified Model Serving Framework
A unified framework for scalable computing
GPU environment management and cluster orchestration
Neural Network Compression Framework for enhanced OpenVINO inference
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
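The core of post-training quantization is mapping floats to a small integer range with a scale and zero point. A minimal sketch of asymmetric (affine) 8-bit quantization in plain Python; function names are illustrative, not AIMET's API:

```python
# Sketch of affine (asymmetric) 8-bit quantization: map a float range
# [lo, hi] onto the integer range [0, 255] via a scale and zero point.

def quantize_int8(values):
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 or 1.0        # avoid zero scale for constants
    zero_point = round(-lo / scale)       # integer that represents 0.0
    q = [max(0, min(255, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(x - zero_point) * scale for x in q]

q, s, z = quantize_int8([-1.0, 0.0, 0.5, 1.0])
print(dequantize(q, s, z))  # each value recovered to within one scale step
```

Libraries like AIMET add calibration, per-channel scales, and quantization-aware training on top of this basic mapping.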
Integrate, train, and manage any AI model or API directly with your database
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
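Heterogeneous treatment effects vary across subgroups. A crude sketch of the idea via within-group difference-in-means, a stand-in for the meta-learner estimators such packages provide; the function and data are hypothetical:

```python
# Sketch of conditional average treatment effect (CATE) estimation by
# stratification: a difference-in-means within each subgroup.

def cate_by_group(rows):
    """rows: iterable of (group, treated: bool, outcome: float) triples."""
    stats = {}  # group -> [treated_sum, treated_n, control_sum, control_n]
    for group, treated, y in rows:
        s = stats.setdefault(group, [0.0, 0, 0.0, 0])
        if treated:
            s[0] += y; s[1] += 1
        else:
            s[2] += y; s[3] += 1
    return {g: s[0] / s[1] - s[2] / s[3] for g, s in stats.items()}

rows = [("young", True, 3.0), ("young", False, 1.0),
        ("old", True, 2.0), ("old", False, 2.0)]
print(cate_by_group(rows))  # {'young': 2.0, 'old': 0.0}
```

The output shows the treatment helping one subgroup and not the other, which is exactly the heterogeneity such packages model with regressors over continuous covariates.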
State-of-the-art diffusion models for image and audio generation
Large Language Model Text Generation Inference
PyTorch library of curated Transformer models and their components
MII makes low-latency and high-throughput inference possible
A library for accelerating Transformer models on NVIDIA GPUs
LLM training code for MosaicML foundation models
Uncover insights, surface problems, monitor, and fine-tune your LLM
Superduper: Integrate AI models and machine learning workflows
A high-performance ML model-serving framework that offers dynamic batching
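Dynamic batching groups queued requests so one model call serves many clients. A minimal sketch of the grouping step in plain Python; real servers also cap how long a request may wait, which is omitted here, and the function name is hypothetical:

```python
# Sketch of dynamic batching: drain a request queue into batches of at
# most `max_batch` items, so each model invocation amortizes overhead
# across many concurrent requests.

from collections import deque

def drain_batches(queue, max_batch):
    """Split queued requests into batches of at most max_batch items."""
    batches = []
    while queue:
        n = min(max_batch, len(queue))
        batches.append([queue.popleft() for _ in range(n)])
    return batches

queue = deque(["r1", "r2", "r3", "r4", "r5"])
print(drain_batches(queue, max_batch=2))
# [['r1', 'r2'], ['r3', 'r4'], ['r5']]
```

Production schedulers add a time window (flush a partial batch after a deadline) to trade a little latency for much higher throughput.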
Framework dedicated to simplifying neural data processing
Multilingual Automatic Speech Recognition with word-level timestamps
Tensor search for humans
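At its core, tensor/vector search embeds documents as vectors and ranks them by similarity to the query vector. A minimal exact-search sketch using cosine similarity; real engines use approximate nearest-neighbor indexes, and these names are illustrative, not any engine's API:

```python
# Sketch of vector search: rank documents by cosine similarity of
# their embedding vectors to the query vector (exact, brute force).

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def search(query, index, top_k=1):
    """index: list of (doc_id, vector); returns top_k ids by similarity."""
    ranked = sorted(index, key=lambda item: cosine(query, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

index = [("cat", [1.0, 0.0]), ("dog", [0.9, 0.1]), ("car", [0.0, 1.0])]
print(search([1.0, 0.05], index, top_k=2))  # ['cat', 'dog']
```

Brute force is O(documents) per query; approximate indexes (HNSW, IVF) trade a little recall for sublinear search over millions of vectors.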
Sparsity-aware deep learning inference runtime for CPUs
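Sparsity-aware inference skips the zero weights that pruning leaves behind, so compute scales with the nonzeros rather than the full tensor. A toy sketch of that idea in plain Python; the storage format and names are illustrative, not any runtime's internals:

```python
# Sketch of why sparsity speeds up inference: store only the nonzero
# weights and skip zeros in the dot product, so work is proportional
# to the number of nonzeros instead of the layer size.

def to_sparse(weights):
    """Keep (index, value) pairs for nonzero weights only."""
    return [(i, w) for i, w in enumerate(weights) if w != 0.0]

def sparse_dot(sparse_weights, x):
    return sum(w * x[i] for i, w in sparse_weights)

weights = [0.0, 2.0, 0.0, 0.0, -1.0, 0.0]   # ~67% sparse
sw = to_sparse(weights)
print(len(sw))                              # 2 multiplies instead of 6
print(sparse_dot(sw, [1, 2, 3, 4, 5, 6]))   # 2*2 + (-1)*5 = -1.0
```

Real CPU runtimes go further with blocked sparsity patterns and vectorized kernels, but the saving comes from the same skipped multiplications.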