Trainable models and NN optimization tools
Probabilistic reasoning and statistical analysis in TensorFlow
Build your chatbot within minutes on your favorite device
Easiest and laziest way to build multi-agent LLM applications
Efficient few-shot learning with Sentence Transformers (see the few-shot sketch after this list)
PyTorch extensions for fast R&D prototyping and Kaggle farming
Official inference library for Mistral models
Open-source tool designed to enhance the efficiency of workloads
Framework dedicated to making neural data processing pipelines simple and fast
A set of Docker images for training and serving models in TensorFlow
A library for accelerating Transformer models on NVIDIA GPUs
Standardized Serverless ML Inference Platform on Kubernetes
20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale
GPU environment management and cluster orchestration
A Unified Library for Parameter-Efficient Learning
Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods
LLM training code for MosaicML foundation models
PyTorch library of curated Transformer models and their components
State-of-the-art Parameter-Efficient Fine-Tuning (see the LoRA sketch after this list)
Open platform for training, serving, and evaluating language models
Easy-to-use Speech Toolkit including Self-Supervised Learning models
OpenMMLab Model Deployment Framework
A toolkit to optimize Keras & TensorFlow ML models for deployment
Low-latency REST API for serving text embeddings
Trainable, memory-efficient, and GPU-friendly PyTorch reproduction of AlphaFold 2
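The "efficient few-shot learning with Sentence Transformers" entry above matches the SetFit project's description. Assuming that is the library meant, the sketch below shows the usual pattern of fitting a classifier from a handful of labeled examples per class; the checkpoint name, dataset, and sample count are placeholder assumptions, and newer SetFit releases expose an equivalent `Trainer`/`TrainingArguments` API instead of `SetFitTrainer`.

```python
from datasets import load_dataset
from setfit import SetFitModel, SetFitTrainer, sample_dataset

# Placeholder dataset: keep only 8 labeled examples per class to simulate few-shot training.
dataset = load_dataset("sst2")
train_dataset = sample_dataset(dataset["train"], label_column="label", num_samples=8)
eval_dataset = dataset["validation"]

# Placeholder Sentence Transformers checkpoint used as the embedding backbone.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()                 # contrastive fine-tuning of the body, then a classification head
print(trainer.evaluate())       # e.g. {"accuracy": ...} on the validation split

# Inference on raw strings returns predicted labels.
print(model(["i loved the spiderman movie!", "pineapple on pizza is the worst"]))
```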
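Likewise, "State-of-the-art Parameter-Efficient Fine-Tuning" reads like the Hugging Face PEFT library's description. Assuming that is the library in question, this is a minimal sketch of wrapping a causal LM with a LoRA adapter; the base checkpoint and the LoRA hyperparameters are illustrative assumptions, not values taken from this list.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

# Placeholder base model; any causal LM checkpoint works the same way.
base_model = "facebook/opt-350m"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Illustrative LoRA settings: small low-rank update matrices injected into attention layers.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                # rank of the low-rank update
    lora_alpha=16,      # scaling applied to the update
    lora_dropout=0.05,  # dropout on the LoRA branch during training
)

# Wrap the frozen base model; only the LoRA parameters remain trainable.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# The wrapped model trains like a normal transformers model, and
# save_pretrained writes only the lightweight adapter weights.
model.save_pretrained("opt-350m-lora-adapter")
```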