FlashInfer: Kernel Library for LLM Serving
Run Local LLMs on Any Device. Open-source and available for commercial use
A high-throughput and memory-efficient inference and serving engine for LLMs
Replace OpenAI GPT with another LLM in your app
Everything you need to build state-of-the-art foundation models
State-of-the-art diffusion models for image and audio generation
A Pythonic framework to simplify AI service building
Unified Model Serving Framework
Official inference library for Mistral models
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
Create HTML profiling reports from pandas DataFrame objects
Low-latency REST API for serving text embeddings
Operating LLMs in production
Single-cell analysis in Python
Large Language Model Text Generation Inference
Efficient few-shot learning with Sentence Transformers
Data manipulation and transformation for audio signal processing
GPU environment management and cluster orchestration
PyTorch library of curated Transformer models and their components
Multilingual Automatic Speech Recognition with word-level timestamps
Training and deploying machine learning models on Amazon SageMaker
Visual Instruction Tuning: Large Language-and-Vision Assistant
Phi-3.5 for Mac: Locally-run Vision and Language Models
Superduper: Integrate AI models and machine learning workflows with your database
Python Package for ML-Based Heterogeneous Treatment Effects Estimation