Official inference library for Mistral models
A framework dedicated to streamlining neural data processing
A set of Docker images for training and serving models in TensorFlow
A library for accelerating Transformer models on NVIDIA GPUs
Standardized Serverless ML Inference Platform on Kubernetes
20+ high-performance LLMs with recipes to pretrain and finetune at scale
GPU environment management and cluster orchestration
A Unified Library for Parameter-Efficient Learning
Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods
LLM training code for MosaicML foundation models
PyTorch library of curated Transformer models and their components
State-of-the-art Parameter-Efficient Fine-Tuning
Open platform for training, serving, and evaluating language models
OpenMMLab Model Deployment Framework
A toolkit for optimizing ML models for deployment with Keras & TensorFlow
Low-latency REST API for serving text-embeddings
Trainable, memory-efficient, and GPU-friendly PyTorch reproduction
An MLOps framework to package, deploy, monitor and manage models
A lightweight vision library for performing large-scale object detection
Create HTML profiling reports from pandas DataFrame objects
Library for serving Transformers models on Amazon SageMaker
Deep learning optimization library: makes distributed training easy
Fast inference engine for Transformer models
Multi-Modal Neural Networks for Semantic Search, based on Mid-Fusion
Tensor search for humans
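Several entries above (text-embedding serving, semantic search, tensor search) revolve around the same core operation: nearest-neighbor lookup over embedding vectors by cosine similarity. A minimal, library-agnostic sketch of that operation in plain Python (the function name and toy vectors are illustrative, not any listed project's API):

```python
import math

def cosine_search(query, corpus, top_k=3):
    """Return indices of the top_k corpus vectors most similar to query,
    ranked by cosine similarity (highest first)."""
    def norm(v):
        return math.sqrt(sum(x * x for x in v))

    q_norm = norm(query)
    scored = []
    for i, vec in enumerate(corpus):
        dot = sum(a * b for a, b in zip(query, vec))
        # Cosine similarity = dot product over the product of magnitudes.
        scored.append((dot / (q_norm * norm(vec)), i))
    scored.sort(reverse=True)
    return [i for _, i in scored[:top_k]]

# Toy "embedding index": four 3-dimensional vectors.
corpus = [[1.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.7, 0.7, 0.0],
          [0.0, 0.0, 1.0]]
query = [1.0, 0.1, 0.0]
print(cosine_search(query, corpus))  # → [0, 2, 1]
```

Production systems in the list replace this linear scan with approximate nearest-neighbor indexes and GPU batching, but the ranking criterion is the same.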