Run local LLMs on any device. Open-source
MII makes low-latency and high-throughput inference possible
Easy-to-use Speech Toolkit including Self-Supervised Learning models
Superduper: Integrate AI models and machine learning workflows
DoWhy is a Python library for causal inference
Official inference library for Mistral models
An unofficial Python package that returns responses from Google Bard
Database system for building simpler and faster AI-powered applications
Open-Source AI Camera. Empower any camera/CCTV with AI
The Triton Inference Server provides an optimized cloud and edge inferencing solution
LLMFlows - Simple, Explicit and Transparent LLM Apps
Prem provides a unified environment to develop AI applications
Tensor search for humans
A library to communicate with ChatGPT, Claude, Copilot, Gemini
A toolkit to optimize Keras & TensorFlow ML models for deployment
A computer vision framework to create and deploy apps in minutes