Port of OpenAI's Whisper model in C/C++
Port of Facebook's LLaMA model in C/C++
Run Local LLMs on Any Device. Open-source
User-friendly AI Interface
ONNX Runtime: cross-platform, high performance ML inferencing
A high-throughput and memory-efficient inference and serving engine
The free, Open Source alternative to OpenAI, Claude and others
OpenVINO™ Toolkit repository
Protect and discover secrets using Gitleaks
High-performance neural network inference framework for mobile
C#/.NET binding of llama.cpp, including LLaMa/GPT model inference
Connect home devices into a powerful cluster to accelerate LLM inference
The official Python client for the Hugging Face Hub
Bayesian inference with probabilistic programming
LLM.swift is a simple and readable library for interacting with LLMs locally
State-of-the-art diffusion models for image and audio generation
Standardized Serverless ML Inference Platform on Kubernetes
An RWKV management and startup tool, fully automated, only 8 MB
Everything you need to build state-of-the-art foundation models
GPU environment management and cluster orchestration
Operating LLMs in production
Private OpenAI on Kubernetes
Data manipulation and transformation for audio signal processing
Training and deploying machine learning models on Amazon SageMaker
A Unified Library for Parameter-Efficient Learning