Run Local LLMs on Any Device. Open-source and available for commercial use
A high-throughput and memory-efficient inference and serving engine for LLMs (see example below)
Ready-to-use OCR with 80+ supported languages and all popular writing scripts (see example below)
A library for accelerating Transformer models on NVIDIA GPUs
Library for OCR-related tasks powered by Deep Learning
GPU environment management and cluster orchestration
Deep learning optimization library that makes distributed training and inference easy, efficient, and effective (see example below)
Everything you need to build state-of-the-art foundation models
State-of-the-art diffusion models for image and audio generation
The official Python client for the Hugging Face Hub (see example below)
A library for training and deploying machine learning models on Amazon SageMaker
Standardized Serverless ML Inference Platform on Kubernetes
Replace OpenAI GPT with another LLM in your app
Bring the notion of Model-as-a-Service to life
The Triton Inference Server provides an optimized cloud and edge inferencing solution
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
Neural Network Compression Framework for enhanced OpenVINO inference
20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale
A Pythonic framework to simplify AI service building
Operating LLMs in production
Multilingual Automatic Speech Recognition with word-level timestamps
A set of Docker images for training and serving models in TensorFlow
Libraries for applying sparsification recipes to neural networks
Gaussian processes in TensorFlow (see example below)
Single-cell analysis in Python (see example below)
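The inference-and-serving-engine tagline matches vLLM's project description. Assuming that project, a minimal offline-generation sketch; the model name and sampling values here are placeholders, not recommendations:

```python
from vllm import LLM, SamplingParams

# Load any Hugging Face-format model; opt-125m is just a small placeholder.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

# vLLM batches prompts internally (continuous batching), so passing many
# prompts at once is how you get its high throughput.
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```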
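The 80+-language OCR tagline matches EasyOCR. Assuming that library, a minimal sketch; the image filename is a placeholder:

```python
import easyocr

# Model weights for the chosen languages are downloaded on first use.
reader = easyocr.Reader(["en", "fr"])

# readtext returns a list of (bounding_box, text, confidence) tuples.
for bbox, text, confidence in reader.readtext("receipt.png"):
    print(f"{confidence:.2f}  {text}")
```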
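The distributed-training tagline matches DeepSpeed. Assuming that library, a sketch of wrapping a model with `deepspeed.initialize`; the model and config values are illustrative only, not a tuned setup:

```python
import torch
import deepspeed

# A tiny stand-in for a real network.
model = torch.nn.Linear(128, 10)

# Minimal config: fp16 training with ZeRO stage-1 optimizer partitioning.
ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 1},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# The returned engine handles data parallelism, ZeRO partitioning, and
# mixed precision; training then uses engine.backward(loss) / engine.step().
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```

Such scripts are normally launched with the `deepspeed` CLI launcher rather than plain `python`, which is what sets up the distributed process group.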
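The Hub-client tagline matches `huggingface_hub`. A minimal sketch of its two most common download calls:

```python
from huggingface_hub import hf_hub_download, snapshot_download

# Fetch a single file from a repo on the Hub (cached locally).
config_path = hf_hub_download(repo_id="gpt2", filename="config.json")

# Or mirror an entire model repo into the local cache.
local_dir = snapshot_download(repo_id="gpt2")
print(config_path, local_dir)
```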
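The Gaussian-processes tagline matches GPflow. A minimal GPflow 2.x regression sketch on synthetic data:

```python
import numpy as np
import gpflow

# Toy 1-D regression data.
X = np.linspace(0, 10, 50).reshape(-1, 1)
Y = np.sin(X) + 0.1 * np.random.randn(50, 1)

# Exact GP regression with a squared-exponential kernel.
model = gpflow.models.GPR(data=(X, Y), kernel=gpflow.kernels.SquaredExponential())

# Fit kernel hyperparameters and noise variance by maximum likelihood.
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)

# Posterior mean and variance at a new input.
mean, var = model.predict_f(np.array([[5.0]]))
print(mean.numpy(), var.numpy())
```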
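The single-cell tagline matches Scanpy. A sketch of its standard embedding pipeline on the small PBMC dataset bundled with the library:

```python
import scanpy as sc

# Small, already-preprocessed PBMC dataset shipped with Scanpy.
adata = sc.datasets.pbmc68k_reduced()

# Standard pipeline: PCA -> neighbor graph -> UMAP embedding.
sc.pp.pca(adata)
sc.pp.neighbors(adata)
sc.tl.umap(adata)

# Plot the embedding colored by the provided cell-type labels.
sc.pl.umap(adata, color="bulk_labels")
```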