Run Local LLMs on Any Device. Open-source and available for commercial use
Easy-to-use speech toolkit including self-supervised learning models
DoWhy is a Python library for causal inference
The Triton Inference Server provides an optimized cloud and edge inferencing solution
MII makes low-latency and high-throughput inference possible
Superduper: Integrate AI models and machine learning workflows
An unofficial Python package that returns responses from Google Bard
Tensor search for humans
A toolkit to optimize ML models for deployment, for Keras & TensorFlow
A computer vision framework to create and deploy apps in minutes
Database system for building simpler and faster AI-powered applications
LLMFlows - Simple, Explicit and Transparent LLM Apps