Official inference library for Mistral models
Library for serving Transformers models on Amazon SageMaker
C++ library for high-performance inference on NVIDIA GPUs
A general-purpose probabilistic programming system
ONNX Runtime: cross-platform, high-performance ML inferencing
Standardized Serverless ML Inference Platform on Kubernetes
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
Official inference framework for 1-bit LLMs
Easy-to-use deep learning framework with 3 key features
Unified Model Serving Framework
Neural Network Compression Framework for enhanced OpenVINO
A set of Docker images for training and serving models in TensorFlow
High-performance neural network inference framework for mobile
Powering Amazon's custom machine learning chips
OpenShell is a safe, private runtime for autonomous AI agents
The Scala 3 compiler, also known as Dotty
MNN is a blazing-fast, lightweight deep learning framework
Audiocraft is a library for audio processing and generation
TypeScript-first schema validation with static type inference
Probabilistic programming in Python
Toolkit for running TensorFlow training scripts on SageMaker
An extremely fast Python type checker and language server
Protect and discover secrets using Gitleaks
The slow descent into madness
Doctrine extensions for PHPStan