Training and deploying machine learning models on Amazon SageMaker
Run local LLMs on any device, open-source
Single-cell analysis in Python
Ready-to-use OCR with 80+ supported languages
A high-throughput and memory-efficient inference and serving engine
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Everything you need to build state-of-the-art foundation models
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
Uplift modeling and causal inference with machine learning algorithms
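To make the idea concrete, here is a minimal sketch of the two-model ("T-learner") approach common in uplift modeling: fit one model on the treated group and one on the control group, and predict their difference. Plain group means stand in for real ML models, and the data is made up for illustration; this is not the API of any particular library.

```python
# Toy two-model uplift estimator: "train" a trivial mean-outcome model
# per treatment arm, then predict uplift as the difference.

def fit_mean_model(rows):
    """'Train' a trivial model: always predict the mean outcome."""
    outcomes = [y for _, y in rows]
    mean = sum(outcomes) / len(outcomes)
    return lambda x: mean

def estimate_uplift(data):
    """data: list of (features, treated_flag, outcome) tuples."""
    treated = [(x, y) for x, t, y in data if t]
    control = [(x, y) for x, t, y in data if not t]
    model_t = fit_mean_model(treated)
    model_c = fit_mean_model(control)
    # Uplift = predicted outcome under treatment minus under control.
    return lambda x: model_t(x) - model_c(x)

data = [
    ({"age": 30}, True, 1.0),
    ({"age": 40}, True, 0.8),
    ({"age": 35}, False, 0.4),
    ({"age": 45}, False, 0.2),
]
uplift = estimate_uplift(data)
print(uplift({"age": 33}))  # ≈ 0.6 (treated mean 0.9 minus control mean 0.3)
```

Real uplift libraries replace the mean models with arbitrary regressors or classifiers and add dedicated meta-learners and evaluation metrics.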
A high-performance ML model serving framework that offers dynamic batching
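Dynamic batching, the feature called out above, means buffering incoming requests and running them through the model as one batch to amortize per-call overhead. The sketch below illustrates the idea in plain Python with a toy model; the class and names are hypothetical, not any framework's actual API, and real servers also flush on a timeout and handle concurrency.

```python
# Illustrative dynamic batcher: buffer requests, flush as one batch
# once the buffer reaches max_batch_size (or when explicitly flushed).

class DynamicBatcher:
    def __init__(self, model_fn, max_batch_size=4):
        self.model_fn = model_fn          # runs inference on a list of inputs
        self.max_batch_size = max_batch_size
        self.pending = []                 # buffered inputs awaiting a batch
        self.results = []

    def submit(self, x):
        self.pending.append(x)
        if len(self.pending) >= self.max_batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            # One model call serves the whole batch, amortizing overhead.
            self.results.extend(self.model_fn(self.pending))
            self.pending = []

def double_model(batch):
    """Toy 'model': doubles each input."""
    return [x * 2 for x in batch]

batcher = DynamicBatcher(double_model, max_batch_size=3)
for x in [1, 2, 3, 4]:
    batcher.submit(x)          # third submit triggers an automatic flush
batcher.flush()                # flush the straggler, as a timeout would
print(batcher.results)         # [2, 4, 6, 8]
```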
DoWhy is a Python library for causal inference
The official Python client for the Hugging Face Hub
Operating LLMs in production
The Triton Inference Server provides an optimized cloud and edge inferencing solution
An unofficial Python package that returns responses from Google Bard
Efficient few-shot learning with Sentence Transformers
Gaussian processes in TensorFlow
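At the core of any Gaussian-process library is a covariance (kernel) function; the squared-exponential (RBF) kernel is the standard default. Below is a pure-Python sketch of that kernel and the matrix it induces over a few inputs, purely for illustration; it does not use or mirror the API of the TensorFlow-based library above.

```python
import math

# Squared-exponential (RBF) kernel: correlation decays with squared
# distance, controlled by a lengthscale; variance sets the diagonal.
def rbf(x1, x2, lengthscale=1.0, variance=1.0):
    return variance * math.exp(-0.5 * ((x1 - x2) / lengthscale) ** 2)

xs = [0.0, 0.5, 1.0]
K = [[rbf(a, b) for b in xs] for a in xs]

# A kernel matrix is symmetric, has the prior variance on its diagonal,
# and nearby points are more strongly correlated than distant ones.
print(K[0][0])               # 1.0
print(K[0][1] == K[1][0])    # True
print(K[0][2] < K[0][1])     # True
```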
Large Language Model Text Generation Inference
Integrate, train and manage any AI models and APIs with your database
Pytorch domain library for recommendation systems
An easy-to-use LLM quantization package with user-friendly APIs
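The basic idea behind weight quantization is mapping float weights onto a small integer grid. Here is a toy sketch of symmetric int8 quantization in plain Python; real LLM quantization packages use far more sophisticated schemes (e.g. calibration-based, per-channel, or GPTQ-style methods), so this is illustrative only.

```python
# Symmetric int8 quantization: scale so the largest-magnitude weight
# maps to 127, round to integers, and dequantize by multiplying back.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.01, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Round-to-nearest bounds the error by half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(all(-127 <= v <= 127 for v in q))  # codes fit in int8 range
print(max_err <= scale / 2)              # True
```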
Superduper: Integrate AI models and machine learning workflows
Data manipulation and transformation for audio signal processing
PyTorch library of curated Transformer models and their components
FlashInfer: Kernel Library for LLM Serving