User-friendly AI Interface
Run local LLMs on any device; open source
OpenVINO™ Toolkit repository
Standardized Serverless ML Inference Platform on Kubernetes
The free, Open Source alternative to OpenAI, Claude and others
Port of OpenAI's Whisper model in C/C++
Simplifies the local serving of AI models from any source
A Pythonic framework to simplify AI service building
Phi-3.5 for Mac: Locally-run Vision and Language Models
Port of Facebook's LLaMA model in C/C++
Data manipulation and transformation for audio signal processing
A scalable inference server for models optimized with OpenVINO
A Unified Library for Parameter-Efficient Learning
An RWKV management and startup tool with full automation, only 8 MB
Libraries for applying sparsification recipes to neural networks
Lightweight, standalone C++ inference engine for Google's Gemma models
AI interface for tinkerers (Ollama, Haystack RAG, Python)
20+ high-performance LLMs with recipes to pretrain and finetune at scale
ONNX Runtime: cross-platform, high-performance ML inferencing
Run local LLMs like Llama, DeepSeek, Kokoro, etc. inside your browser
DoWhy is a Python library for causal inference
LLM training code for MosaicML foundation models
A high-throughput and memory-efficient inference and serving engine
An Open-Source Programming Framework for Agentic AI
Tensor search for humans