ONNX Runtime: cross-platform, high-performance ML inferencing
User-friendly AI Interface
High-performance neural network inference framework for mobile
Run local LLMs on any device; open source
The free, open-source alternative to OpenAI, Claude, and others
Bolt is a high-performance deep learning library
OpenMLDB is an open-source machine learning database
Standardized Serverless ML Inference Platform on Kubernetes
Run local LLMs such as Llama, DeepSeek, and Kokoro inside your browser
Unified Model Serving Framework
An MLOps framework to package, deploy, monitor, and manage models
PArallel Distributed Deep LEarning: Machine Learning Framework
The official Python client for the Hugging Face Hub
Private Open AI on Kubernetes
Operating LLMs in production
Deep learning API and server in C++14 with support for Caffe and PyTorch
20+ high-performance LLMs with recipes to pretrain and finetune at scale
Easy-to-use deep learning framework with 3 key features
Run serverless GPU workloads with fast cold starts on bare-metal
LLM training code for MosaicML foundation models
Superduper: Integrate AI models and machine learning workflows
OpenMMLab Model Deployment Framework
The deep learning toolkit for speech-to-text
Guide to deploying deep-learning inference networks
Uniform deep learning inference framework for mobile