DoWhy is a Python library for causal inference
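The core idea behind causal-inference libraries like DoWhy — adjusting for a confounder to estimate an interventional effect — can be sketched in plain Python. This is a minimal backdoor-adjustment illustration on synthetic data, not DoWhy's actual API; all names and numbers here are made up:

```python
from collections import defaultdict

# Synthetic observations: (treatment t, confounder z, outcome y).
data = [
    (1, 0, 1), (1, 0, 1), (0, 0, 0), (0, 0, 1),
    (1, 1, 1), (0, 1, 0), (0, 1, 0), (1, 1, 0),
]

def backdoor_effect(rows):
    """E[Y|do(T=1)] - E[Y|do(T=0)] via backdoor adjustment over z:
    sum_z P(z) * (E[Y|T=1,z] - E[Y|T=0,z])."""
    by_z = defaultdict(lambda: {0: [], 1: []})
    for t, z, y in rows:
        by_z[z][t].append(y)
    n = len(rows)
    effect = 0.0
    for groups in by_z.values():
        p_z = (len(groups[0]) + len(groups[1])) / n
        mean1 = sum(groups[1]) / len(groups[1])
        mean0 = sum(groups[0]) / len(groups[0])
        effect += p_z * (mean1 - mean0)
    return effect

print(backdoor_effect(data))  # → 0.5
```

Stratifying on z before differencing treated and control means is what distinguishes this from a naive correlation between t and y.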
Large Language Model Text Generation Inference
Data manipulation and transformation for audio signal processing
Uplift modeling and causal inference with machine learning algorithms
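Uplift modeling estimates how much a treatment (e.g. a marketing campaign) changes each subgroup's outcome, rather than the outcome itself. A minimal two-model ("T-learner") sketch, using per-segment means as stand-in models on invented data — not any specific library's API:

```python
from collections import defaultdict

# Synthetic campaign data: (segment x, treated flag t, converted flag y).
rows = [
    ("young", 1, 1), ("young", 1, 1), ("young", 0, 0), ("young", 0, 1),
    ("old", 1, 0), ("old", 1, 0), ("old", 0, 0), ("old", 0, 1),
]

def t_learner_uplift(rows):
    """Fit one 'model' per arm (here just a group mean), then score
    uplift(x) = mu_treated(x) - mu_control(x) for each segment."""
    sums = defaultdict(lambda: [0, 0])  # (segment, arm) -> [conversions, count]
    for x, t, y in rows:
        cell = sums[(x, t)]
        cell[0] += y
        cell[1] += 1
    segments = {x for x, _, _ in rows}
    return {x: sums[(x, 1)][0] / sums[(x, 1)][1]
               - sums[(x, 0)][0] / sums[(x, 0)][1]
            for x in segments}

uplift = t_learner_uplift(rows)
print(uplift)  # "young" responds (+0.5), "old" is hurt (-0.5)
```

Targeting only positive-uplift segments is the point: a plain response model would conflate people who convert anyway with people who convert because of the treatment.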
Efficient few-shot learning with Sentence Transformers
FlashInfer: Kernel Library for LLM Serving
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
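Multi-LoRA serving works because each fine-tune is stored as a small low-rank delta over shared base weights: the effective weight is W' = W + (alpha/r)·B·A. A toy pure-Python sketch of that merge, with made-up 2×2 matrices (not the server's actual implementation):

```python
def matmul(a, b):
    # Naive dense matrix multiply for small illustrative matrices.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def apply_lora(w, a, b, alpha, r):
    """Effective weight W' = W + (alpha / r) * B @ A.
    B is d_out x r and A is r x d_in, so each adapter stores and
    transfers far fewer parameters than the full weight matrix."""
    delta = matmul(b, a)
    scale = alpha / r
    return [[w[i][j] + scale * delta[i][j]
             for j in range(len(w[0]))] for i in range(len(w))]

w = [[1.0, 0.0], [0.0, 1.0]]   # 2x2 base weight
b_mat = [[1.0], [2.0]]         # d_out x r, rank-1 adapter
a_mat = [[0.5, 0.5]]           # r x d_in
print(apply_lora(w, a_mat, b_mat, alpha=2, r=1))
```

Because the base W is shared, a server can keep thousands of (A, B) pairs resident and switch adapters per request instead of loading a full model per fine-tune.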
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
A library to communicate with ChatGPT, Claude, Copilot, and Gemini
State-of-the-art diffusion models for image and audio generation
A high-performance ML model serving framework that offers dynamic batching
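Dynamic batching means accumulating incoming requests until either a batch-size cap or a wait deadline is hit, then running them through the model together. A stdlib-only sketch of that policy (illustrative parameter names, not any framework's API):

```python
import queue
import time

def dynamic_batcher(requests, max_batch=4, max_wait=0.05):
    """Group requests into batches: flush when max_batch items have
    arrived or max_wait seconds have passed since the batch started."""
    q = queue.Queue()
    for r in requests:
        q.put(r)
    batches = []
    while not q.empty():
        batch = [q.get()]
        deadline = time.monotonic() + max_wait
        while len(batch) < max_batch:
            timeout = deadline - time.monotonic()
            if timeout <= 0:
                break
            try:
                batch.append(q.get(timeout=timeout))
            except queue.Empty:
                break
        batches.append(batch)
    return batches

print(dynamic_batcher(list(range(10)), max_batch=4))
# → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

The max_wait deadline bounds the latency a lone request pays for batching, while max_batch bounds memory and keeps GPU kernels at an efficient size.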
A Unified Library for Parameter-Efficient Learning
Uncover insights, surface problems, monitor, and fine-tune your LLM

Trainable models and NN optimization tools
Probabilistic reasoning and statistical analysis in TensorFlow
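The kind of probabilistic reasoning such a library automates can be shown with a stdlib-only rejection-sampling sketch: put a uniform prior on a coin's bias, keep only the priors whose simulated experiments reproduce the observed data, and average. This illustrates the concept, not TensorFlow Probability's API:

```python
import random

random.seed(0)

def posterior_heads_prob(heads, flips, samples=100_000):
    """Posterior mean of a coin's bias p given `heads` out of `flips`,
    via rejection sampling: draw p ~ Uniform(0, 1), simulate the
    experiment, keep p only when the simulation matches the data."""
    kept = []
    for _ in range(samples):
        p = random.random()
        sim = sum(random.random() < p for _ in range(flips))
        if sim == heads:
            kept.append(p)
    return sum(kept) / len(kept)

# 7 heads in 10 flips: the exact posterior mean is 8/12 ≈ 0.667.
print(posterior_heads_prob(7, 10))
```

Real libraries replace this brute-force loop with variational inference or MCMC, but the posterior being approximated is the same.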
Multilingual Automatic Speech Recognition with word-level timestamps
OpenMMLab Model Deployment Framework
Easy-to-use deep learning framework with 3 key features
Optimizing inference proxy for LLMs
Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods
A lightweight vision library for performing large-scale object detection
PyTorch extensions for fast R&D prototyping and Kaggle farming
Unified Model Serving Framework
Framework dedicated to neural data processing
Official inference library for Mistral models