Trainable models and NN optimization tools
Official inference library for Mistral models
Neural Network Compression Framework for enhanced OpenVINO inference
GPU environment management and cluster orchestration
Gaussian processes in TensorFlow
20+ high-performance LLMs with recipes to pretrain and finetune at scale
Efficient few-shot learning with Sentence Transformers
MII makes low-latency and high-throughput inference possible
Simplifies the local serving of AI models from any source
Multilingual Automatic Speech Recognition with word-level timestamps
Uncover insights, surface problems, monitor, and fine-tune your LLM
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
PyTorch library of curated Transformer models and their components
Phi-3.5 for Mac: Locally-run Vision and Language Models
Standardized Serverless ML Inference Platform on Kubernetes
Library for OCR-related tasks powered by Deep Learning
Data manipulation and transformation for audio signal processing
PyTorch domain library for recommendation systems
Bring the notion of Model-as-a-Service to life
State-of-the-art Parameter-Efficient Fine-Tuning
A set of Docker images for training and serving models in TensorFlow
Low-latency REST API for serving text embeddings
Sparsity-aware deep learning inference runtime for CPUs
Replace OpenAI GPT with another LLM in your app
LLM training code for MosaicML foundation models