A high-throughput and memory-efficient inference and serving engine for LLMs
Phi-3.5 for Mac: Locally Run Vision and Language Models
Run Local LLMs on Any Device. Open-source and available for commercial use
The official Python client for the Hugging Face Hub
A unified framework for scalable computing
Large Language Model Text Generation Inference
FlashInfer: Kernel Library for LLM Serving
Single-cell analysis in Python
Everything you need to build state-of-the-art foundation models
Ready-to-use OCR with 80+ supported languages
Operating LLMs in production
Neural Network Compression Framework for enhanced OpenVINO inference
Efficient few-shot learning with Sentence Transformers
Library for OCR-related tasks powered by Deep Learning
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Data manipulation and transformation for audio signal processing
Libraries for applying sparsification recipes to neural networks
Gaussian processes in TensorFlow
A Pythonic framework to simplify AI service building
Uncover insights, surface problems, monitor, and fine-tune your LLM
Uplift modeling and causal inference with machine learning algorithms
State-of-the-art Parameter-Efficient Fine-Tuning
A library for accelerating Transformer models on NVIDIA GPUs
Trainable models and neural network optimization tools
PyTorch library of curated Transformer models and their components