User-friendly AI Interface
Port of OpenAI's Whisper model in C/C++
Port of Facebook's LLaMA model in C/C++
Run local LLMs on any device; open source
ONNX Runtime: cross-platform, high-performance ML inferencing
A high-throughput and memory-efficient inference and serving engine
Self-hosted, community-driven, local OpenAI-compatible API
High-performance neural network inference framework for mobile
C++ library for high-performance inference on NVIDIA GPUs
Open-source AI camera: empower any camera/CCTV
OpenVINO™ Toolkit repository
Open standard for machine learning interoperability
Uncover insights, surface problems, monitor, and fine-tune your LLM
MNN is a blazing fast, lightweight deep learning framework
A scalable inference server for models optimized with OpenVINO
Tensor search for humans
Everything you need to build state-of-the-art foundation models
Protect and discover secrets using Gitleaks
The official Python client for the Hugging Face Hub
A high-performance ML model serving framework that offers dynamic batching
Run local LLMs such as Llama, DeepSeek, and Kokoro inside your browser
INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model
LLMs as Copilots for Theorem Proving in Lean
FlashInfer: Kernel Library for LLM Serving
An RWKV management and startup tool: fully automated, only 8 MB
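Several of the projects above (LocalAI, the llama.cpp server, OpenVINO Model Server) expose an OpenAI-compatible HTTP API, so one client snippet works against any of them. Below is a minimal sketch of building a chat-completions request body for such a server; the base URL, port, and model name are assumptions that depend on your local setup, not values taken from any of the projects listed.

```python
import json

# Assumed local endpoint -- adjust host/port for your own server
# (e.g. a LocalAI or llama.cpp server started on port 8080).
BASE_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "llama-3") -> dict:
    """Build the JSON body used by the OpenAI chat-completions API.

    The default model name is a placeholder; use whatever model
    identifier your local server actually serves.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

payload = build_chat_request("Summarize this repository list.")
print(json.dumps(payload, indent=2))
```

To send the request you could POST `payload` to `BASE_URL` with any HTTP client, or point the official `openai` Python client at the local base URL; because the wire format matches OpenAI's, switching between the self-hosted engines above usually means changing only the URL and model name.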