Run local LLMs on any device; open-source
A Pythonic framework to simplify AI service building
Phi-3.5 for Mac: Locally-run Vision and Language Models
A library for accelerating Transformer models on NVIDIA GPUs
Simplifies the local serving of AI models from any source
Standardized Serverless ML Inference Platform on Kubernetes
20+ high-performance LLMs with recipes to pretrain and finetune at scale
Libraries for applying sparsification recipes to neural networks
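At its simplest, a sparsification recipe zeros out the lowest-magnitude weights in a layer. A minimal one-shot magnitude-pruning sketch in pure Python (the data and function name are hypothetical, for illustration only; real sparsification libraries apply such recipes gradually during training):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (one-shot sketch).

    Ties at the threshold are all pruned, so the achieved sparsity can
    slightly exceed the requested fraction.
    """
    k = int(len(weights) * sparsity)  # how many weights to remove
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]

# half the weights survive, the smallest-magnitude half becomes zero
print(magnitude_prune([0.5, -0.05, 1.2, 0.01, -0.9, 0.3], 0.5))
# → [0.5, 0.0, 1.2, 0.0, -0.9, 0.0]
```

In practice the same idea is applied per-layer to tensors, often with a schedule that ramps sparsity up over many training steps.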
Data manipulation and transformation for audio signal processing
A Unified Library for Parameter-Efficient Learning
Deep learning optimization library: makes distributed training easy
DoWhy is a Python library for causal inference
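The core computation behind libraries like DoWhy is causal-effect estimation with confounding adjusted for. A pure-Python sketch of backdoor adjustment on a toy discrete dataset (the data are made up for illustration; DoWhy itself automates identification and estimation from a causal graph rather than requiring this by hand):

```python
# rows: (w, t, y) — binary confounder, treatment, outcome (hypothetical data)
data = [
    (0, 0, 0), (0, 0, 0), (0, 1, 1), (0, 1, 0),
    (1, 0, 1), (1, 0, 0), (1, 1, 1), (1, 1, 1),
]

def p_y_given_t_w(t, w):
    # empirical P(Y=1 | T=t, W=w)
    ys = [y for (w_, t_, y) in data if w_ == w and t_ == t]
    return sum(ys) / len(ys)

def p_w(w):
    # empirical P(W=w)
    return sum(1 for (w_, _, _) in data if w_ == w) / len(data)

def effect(t):
    # backdoor adjustment: E[Y | do(T=t)] = sum_w P(Y=1 | T=t, W=w) P(W=w)
    return sum(p_y_given_t_w(t, w) * p_w(w) for w in (0, 1))

ate = effect(1) - effect(0)  # average treatment effect
print(ate)  # → 0.5
```

Conditioning on the confounder W before averaging is what distinguishes this from the naive difference of observed means.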
LLM training code for MosaicML foundation models
An MLOps framework to package, deploy, monitor and manage models
Tensor search for humans
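Tensor (vector) search at its core ranks indexed documents by embedding similarity to a query vector. A minimal pure-Python sketch with hand-written 3-d "embeddings" (all names and vectors here are hypothetical; a real engine produces embeddings with a neural encoder and uses an approximate-nearest-neighbor index):

```python
import math

# toy index: document id -> embedding vector
index = {
    "doc_cat": [0.9, 0.1, 0.0],
    "doc_dog": [0.8, 0.2, 0.1],
    "doc_car": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    # cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, k=2):
    # return the k document ids most similar to the query vector
    ranked = sorted(index, key=lambda d: cosine(index[d], query_vec), reverse=True)
    return ranked[:k]

print(search([1.0, 0.0, 0.0]))  # → ['doc_cat', 'doc_dog']
```

Exhaustive scoring like this is O(n) per query; production systems swap in an approximate index (e.g. HNSW) to keep the same ranking behavior at scale.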
A unified framework for scalable computing
Powering Amazon's custom machine-learning chips
High quality, fast, modular reference implementation of SSD in PyTorch
A computer vision framework to create and deploy apps in minutes
Database system for building simpler and faster AI-powered applications
Run 100B+ language models at home, BitTorrent-style
Training and implementation of chatbots built on GPT-like architectures
Deploy an ML inference service on a budget in 10 lines of code