Easiest and laziest way to build multi-agent LLM applications
Efficient few-shot learning with Sentence Transformers
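This tagline matches Hugging Face's SetFit. A minimal sketch, assuming the `setfit` and `datasets` packages and a public Sentence Transformers checkpoint; the tiny labeled dataset is illustrative only:

```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# Few-shot setup: only a handful of labeled examples per class.
train_ds = Dataset.from_dict({
    "text": ["great movie", "terrible plot", "loved it", "waste of time"],
    "label": [1, 0, 1, 0],
})

# Fine-tune a Sentence Transformers backbone with a classification head on top.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_ds)
trainer.train()

print(model.predict(["a fantastic film", "boring and long"]))
```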
Framework dedicated to neural data processing
PyTorch extensions for fast R&D prototyping and Kaggle farming
Official inference library for Mistral models
A set of Docker images for training and serving models in TensorFlow
A library for accelerating Transformer models on NVIDIA GPUs
Standardized Serverless ML Inference Platform on Kubernetes
20+ high-performance LLMs with recipes to pretrain and finetune at scale
GPU environment management and cluster orchestration
Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods
LLM training code for MosaicML foundation models
PyTorch library of curated Transformer models and their components
State-of-the-art Parameter-Efficient Fine-Tuning
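This is the description of Hugging Face PEFT. A minimal LoRA sketch, assuming `peft` and `transformers` are installed; the base checkpoint and LoRA hyperparameters are placeholder choices:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Load a small causal LM as the frozen base model (placeholder checkpoint).
base = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA: train low-rank adapter matrices instead of the full weights.
config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16, lora_dropout=0.05)
model = get_peft_model(base, config)

# Only a tiny fraction of the parameters is trainable.
model.print_trainable_parameters()
```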
Easy-to-use Speech Toolkit including Self-Supervised Learning models
OpenMMLab Model Deployment Framework
A toolkit to optimize Keras & TensorFlow ML models for deployment
Low-latency REST API for serving text embeddings
An MLOps framework to package, deploy, monitor and manage models
A lightweight vision library for performing large-scale object detection
Create HTML profiling reports from pandas DataFrame objects
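This matches ydata-profiling (formerly pandas-profiling). A minimal sketch, assuming the `ydata_profiling` package; the DataFrame contents are illustrative:

```python
import pandas as pd
from ydata_profiling import ProfileReport

# Any DataFrame works; this one is illustrative.
df = pd.DataFrame({"age": [23, 35, 41, 29], "income": [40_000, 52_000, 61_000, 48_000]})

# Build the profiling report and export it as a standalone HTML file.
report = ProfileReport(df, title="Example profiling report")
report.to_file("report.html")
```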
Library for serving Transformers models on Amazon SageMaker
Deep learning optimization library: makes distributed training easy
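This reads like DeepSpeed's description. A minimal sketch of engine initialization, assuming the `deepspeed` package and launch via its distributed launcher (e.g. `deepspeed train.py`); the model and config values are placeholders:

```python
import torch
import deepspeed

# Placeholder model; in practice this is your real network.
model = torch.nn.Linear(128, 2)

# Minimal DeepSpeed config: batch size plus optimizer settings.
ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
}

# DeepSpeed wraps the model and optimizer into a distributed engine.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

x = torch.randn(8, 128).to(engine.device)
loss = engine(x).mean()
engine.backward(loss)  # the engine handles gradient scaling/partitioning
engine.step()
```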
Multi-Modal Neural Networks for Semantic Search, based on Mid-Fusion
Tensor search for humans