Optimizing inference proxy for LLMs
The easiest and laziest way to build multi-agent LLM applications
Replace OpenAI GPT with another LLM in your app
Integrate, train and manage any AI models and APIs with your database
Training and deploying machine learning models on Amazon SageMaker
OpenAI-style API for open large language models
Multilingual Automatic Speech Recognition with word-level timestamps
Multi-Modal Neural Networks for Semantic Search, based on Mid-Fusion
Run local LLMs on any device. Open-source
Open platform for training, serving, and evaluating language models
Tensor search for humans
Single-cell analysis in Python
Ready-to-use OCR with 80+ supported languages
A high-throughput and memory-efficient inference and serving engine
Everything you need to build state-of-the-art foundation models
Operating LLMs in production
DoWhy is a Python library for causal inference
The official Python client for the Huggingface Hub
FlashInfer: Kernel Library for LLM Serving
Gaussian processes in TensorFlow
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Adversarial Robustness Toolbox (ART) - Python Library for ML security
The unofficial Python package that returns responses from Google Bard
Uplift modeling and causal inference with machine learning algorithms