A set of Docker images for training and serving models in TensorFlow
Superduper: Integrate AI models and machine learning workflows
Standardized Serverless ML Inference Platform on Kubernetes
20+ high-performance LLMs with recipes to pretrain and finetune them at scale
Data manipulation and transformation for audio signal processing
Phi-3.5 for Mac: Locally-run Vision and Language Models
A Unified Library for Parameter-Efficient Learning
State-of-the-art Parameter-Efficient Fine-Tuning
Deep learning optimization library that makes distributed training easy
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Libraries for applying sparsification recipes to neural networks
Gaussian processes in TensorFlow
Neural Network Compression Framework for enhanced OpenVINO
OpenAI-style API for open large language models
Sparsity-aware deep learning inference runtime for CPUs
Large Language Model Text Generation Inference
The easiest and laziest way to build multi-agent LLM applications
Efficient few-shot learning with Sentence Transformers
PyTorch domain library for recommendation systems
Official inference library for Mistral models
A Pythonic framework to simplify AI service building
Uplift modeling and causal inference with machine learning algorithms
DoWhy is a Python library for causal inference
Simplifies the local serving of AI models from any source
Python Package for ML-Based Heterogeneous Treatment Effects Estimation