An unofficial Python package that returns responses from Google Bard
OpenAI-style API for open large language models
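Several entries above expose an OpenAI-compatible endpoint, so existing clients work by pointing the base URL at the local server and sending the standard chat-completions request shape. A minimal sketch of that request body (the URL and model name are placeholders, not real deployments):

```python
import json

# Build an OpenAI-style /v1/chat/completions request body.
# The base URL and model name below are assumptions for illustration.
base_url = "http://localhost:8000/v1/chat/completions"  # assumed local server

payload = {
    "model": "my-local-model",  # whatever name the server registered
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
}

# Serialize exactly as it would be POSTed to the endpoint.
body = json.dumps(payload)
print(len(body) > 0)
```

Any HTTP client can POST this body; the point is that the payload schema, not the client library, is what makes a server "OpenAI-style".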
Run Local LLMs on Any Device. Open-source
The Triton Inference Server provides an optimized cloud and edge inferencing solution
The easiest and laziest way to build multi-agent LLM applications
Operating LLMs in production
Low-latency REST API for serving text-embeddings
Optimizing inference proxy for LLMs
A high-performance ML model serving framework offering dynamic batching
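Dynamic batching, mentioned in the entry above, groups concurrently arriving requests into one model call, bounded by a maximum batch size and a small latency budget for stragglers. A toy sketch of the collection policy (not any framework's actual API):

```python
import time
from queue import Queue, Empty


def collect_batch(q: Queue, max_batch: int = 8, max_wait_s: float = 0.01):
    """Pull up to max_batch requests, waiting at most max_wait_s for more."""
    batch = [q.get()]  # block until the first request arrives
    deadline = time.monotonic() + max_wait_s
    while len(batch) < max_batch:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break  # latency budget exhausted; serve what we have
        try:
            batch.append(q.get(timeout=remaining))
        except Empty:
            break  # no more requests arrived in time
    return batch


q: Queue = Queue()
for i in range(5):
    q.put(f"req-{i}")
print(collect_batch(q, max_batch=4))  # → ['req-0', 'req-1', 'req-2', 'req-3']
```

The trade-off is the usual one: a longer wait fills larger batches (better GPU utilization) at the cost of added per-request latency.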
Large Language Model Text Generation Inference
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
A library for accelerating Transformer models on NVIDIA GPUs
Library for OCR-related tasks powered by Deep Learning
Data manipulation and transformation for audio signal processing
Replace OpenAI GPT with another LLM in your app
Simplifies the local serving of AI models from any source
Unified Model Serving Framework
Bring the notion of Model-as-a-Service to life
A GPU-accelerated library containing highly optimized building blocks
A framework dedicated to making neural data processing
Run any Llama 2 model locally, with a Gradio UI, on GPU or CPU, from anywhere
Run 100B+ language models at home, BitTorrent-style
Implementation of "Tree of Thoughts: Deliberate Problem Solving with Large Language Models"