Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Neural Network Compression Framework for enhanced OpenVINO inference
A Unified Library for Parameter-Efficient Learning
PyTorch library of curated Transformer models and their components
Run any Llama 2 model locally with a Gradio UI on GPU or CPU from anywhere
Unofficial Python package that returns responses from Google Bard
Open platform for training, serving, and evaluating language models
A high-performance ML model serving framework with dynamic batching
Framework dedicated to neural data processing
Probabilistic reasoning and statistical analysis in TensorFlow (see the usage sketch after this list)
LLMFlows - Simple, Explicit and Transparent LLM Apps
Framework for Accelerating LLM Generation with Multiple Decoding Heads
Low-latency REST API for serving text embeddings
Tensor search for humans
A general-purpose probabilistic programming system
Implementation of "Tree of Thoughts
Implementation of model-parallel autoregressive transformers on GPUs
A computer vision framework to create and deploy apps in minutes
The deep learning toolkit for speech-to-text
CPU/GPU inference server for Hugging Face transformer models
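
As an illustration of how one of the listed libraries is typically used, below is a minimal sketch for the probabilistic-analysis entry above, which appears to describe TensorFlow Probability (that attribution is an assumption). The calls shown (`tfp.distributions.Normal`, `sample`, `log_prob`) are standard TensorFlow Probability API; the specific distribution and statistics are arbitrary choices for the example.

```python
# Minimal TensorFlow Probability sketch: define a distribution, sample from it,
# and score the samples under the model.
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# A unit normal distribution.
dist = tfd.Normal(loc=0.0, scale=1.0)

# Draw samples and evaluate their log-probabilities.
samples = dist.sample(1000, seed=42)
log_probs = dist.log_prob(samples)

# Summary statistics of the drawn samples.
print("mean:", float(tf.reduce_mean(samples)))
print("stddev:", float(tf.math.reduce_std(samples)))
print("mean log-prob:", float(tf.reduce_mean(log_probs)))
```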