An unofficial Python package that returns responses from Google Bard
OpenAI-style API for open large language models
Run Local LLMs on Any Device. Open-source
The easiest and laziest way to build multi-agent LLM applications
Low-latency REST API for serving text embeddings
Optimizing inference proxy for LLMs
A high-performance ML model serving framework with dynamic batching
Operating LLMs in production
Large Language Model Text Generation Inference
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
A library for accelerating Transformer models on NVIDIA GPUs
Replace OpenAI GPT with another LLM in your app
Data manipulation and transformation for audio signal processing
Simplifies the local serving of AI models from any source
Unified Model Serving Framework
A GPU-accelerated library containing highly optimized building blocks
A framework dedicated to neural data processing
Run any Llama 2 model locally with a Gradio UI, on GPU or CPU, from anywhere
Run 100B+ language models at home, BitTorrent-style
Implementation of "Tree of Thoughts"