ONNX Runtime: cross-platform, high-performance ML inference (a minimal usage sketch appears after this list)
High-performance neural network inference framework for mobile
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
Powering Amazon's custom machine learning chips
A set of Docker images for training and serving models in TensorFlow
Replace OpenAI GPT with another LLM in your app
LLM training code for MosaicML foundation models
Library for serving Transformers models on Amazon SageMaker
PArallel Distributed Deep LEarning: Machine Learning Framework
C++ library for high-performance inference on NVIDIA GPUs
Bayesian inference with probabilistic programming (see the model sketch after this list)
Integrate, train, and manage any AI model or API with your database
OpenMLDB is an open-source machine learning database
C#/.NET binding of llama.cpp, including LLaMA/GPT model inference
Database system for building simpler and faster AI-powered applications
Multi-Modal Neural Networks for Semantic Search, based on Mid-Fusion
High-quality, fast, modular reference implementation of SSD (Single Shot MultiBox Detector) in PyTorch
LLMs and Machine Learning made easy
An Open-Source Programming Framework for Agentic AI
Open standard for machine learning interoperability (see the graph-building sketch after this list)
Standardized Serverless ML Inference Platform on Kubernetes
LLMs as Copilots for Theorem Proving in Lean
Deep Learning API and Server in C++14 with support for Caffe and PyTorch
Build Production-Ready Agentic Workflows with Natural Language
On-device AI across mobile, embedded, and edge devices for PyTorch
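
For the ONNX Runtime entry above, a minimal sketch of a single forward pass through its Python API; the model path, input shape, and execution provider are illustrative assumptions, not taken from any project listed here.

```python
# Minimal sketch: one inference call with ONNX Runtime's Python API.
# "model.onnx" and the (1, 3, 224, 224) input shape are placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Use the model's declared input name and feed a dummy tensor of the assumed shape.
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy_input})  # None -> return all outputs
print(outputs[0].shape)
```

Swapping the providers list is the usual way to retarget the same model file at different hardware backends.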
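
The "Bayesian inference with probabilistic programming" entry does not name a library, so the sketch below uses PyMC purely as one common example of the idea; the synthetic data, priors, and sampler settings are invented for illustration.

```python
# Minimal sketch: declare a generative model, condition on data, sample the posterior.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
observed = rng.normal(loc=1.5, scale=1.0, size=100)   # synthetic observations

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)          # prior over the mean
    sigma = pm.HalfNormal("sigma", sigma=5.0)         # prior over the noise scale
    pm.Normal("obs", mu=mu, sigma=sigma, observed=observed)
    trace = pm.sample(1000, tune=1000, chains=2)      # posterior samples via MCMC

print(trace.posterior["mu"].mean())
```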
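
To make the ONNX entry concrete, a sketch of building and checking a one-node graph with the onnx Python package; the operator choice, tensor shapes, and file name are arbitrary examples.

```python
# Minimal sketch: construct, validate, and save a tiny ONNX graph.
import onnx
from onnx import helper, TensorProto

node = helper.make_node("Relu", inputs=["x"], outputs=["y"])
graph = helper.make_graph(
    [node],
    "tiny_graph",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 4])],
    outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 4])],
)
model = helper.make_model(graph)
onnx.checker.check_model(model)      # structural validation
onnx.save(model, "tiny_relu.onnx")   # portable to any ONNX-compatible runtime
```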