Fast Forward Computer Vision (and other ML workloads!)
Fault-tolerant, highly scalable GPU orchestration
Fast, flexible, and easy-to-use probabilistic modelling in Python
State-of-the-art Parameter-Efficient Fine-Tuning
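A minimal sketch of what parameter-efficient fine-tuning looks like with the Hugging Face `peft` package; the GPT-2 base model, target module name, and LoRA hyperparameters below are illustrative choices, not prescribed by the project.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Wrap a small causal LM with LoRA adapters so only a tiny
# fraction of the parameters is trained.
base = AutoModelForCausalLM.from_pretrained("gpt2")
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # prints how few weights are trainable
```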
The unified and scalable ML library for large-scale training
Open platform for training, serving, and evaluating language models
Run the Stable Diffusion releases in a Docker container
Differentiable SDE solvers with GPU support and efficient sensitivity analysis
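A minimal sketch of integrating an SDE with `torchsde`'s `sdeint` entry point; the Ornstein-Uhlenbeck drift/diffusion, batch size, and time grid here are arbitrary examples.

```python
import torch
import torchsde

class OrnsteinUhlenbeck(torch.nn.Module):
    noise_type = "diagonal"  # attributes torchsde expects on the SDE object
    sde_type = "ito"

    def f(self, t, y):               # drift term
        return -y

    def g(self, t, y):               # diffusion term
        return 0.2 * torch.ones_like(y)

y0 = torch.full((4, 1), 1.0)          # batch of 4, state dimension 1
ts = torch.linspace(0.0, 1.0, 50)
ys = torchsde.sdeint(OrnsteinUhlenbeck(), y0, ts)   # shape (50, 4, 1)
```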
High-quality, fast, modular reference implementation of SSD in PyTorch
OpenAI-style API for open large language models
Large Language Model Text Generation Inference
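A hedged sketch of querying a running Text Generation Inference server over HTTP with `requests`; the localhost address, port, and prompt are assumptions for illustration, while the `/generate` route and `inputs`/`parameters` fields follow TGI's documented API.

```python
import requests

# Assumes a TGI server is already running locally on port 8080.
resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is deep learning?",
        "parameters": {"max_new_tokens": 64, "temperature": 0.7},
    },
)
print(resp.json()["generated_text"])
```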
ReFT: Representation Finetuning for Language Models
Tensor Learning in Python
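As a quick illustration of the kind of tensor decomposition such a library provides, here is a sketch using TensorLy's CP (PARAFAC) decomposition; the random data and rank are arbitrary.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# CP decomposition of a random 3-way tensor into rank-3 factors.
X = tl.tensor(np.random.rand(10, 12, 8))
weights, factors = parafac(X, rank=3)

# Reconstruct and check the relative approximation error.
X_hat = tl.cp_to_tensor((weights, factors))
print(tl.norm(X - X_hat) / tl.norm(X))
```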
Math OCR model that outputs LaTeX and markdown
Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, ...)
Low-latency REST API for serving text embeddings
Standardized Serverless ML Inference Platform on Kubernetes
Chinese LLaMA & Alpaca large language models + local CPU/GPU training
MII makes low-latency and high-throughput inference possible
Basaran, an open-source alternative to the OpenAI text completion API
Making large AI models cheaper, faster and more accessible
Open Source Differentiable Computer Vision Library
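A small sketch of differentiable image operations in Kornia; inputs follow its usual BxCxHxW tensor convention, and the specific color conversion and filters shown are just examples.

```python
import torch
import kornia

# A batch of one RGB image with values in [0, 1]; gradients flow through the ops.
img = torch.rand(1, 3, 64, 64, requires_grad=True)

gray = kornia.color.rgb_to_grayscale(img)                          # 1x1x64x64
blurred = kornia.filters.gaussian_blur2d(img, (5, 5), (1.5, 1.5))  # Gaussian smoothing
edges = kornia.filters.sobel(gray)                                 # edge magnitude

edges.sum().backward()          # differentiable end to end
print(img.grad.shape)
```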
A set of Docker images for training and serving models in TensorFlow
2D and 3D face alignment library built using PyTorch
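A hedged sketch of detecting 2D facial landmarks with the `face_alignment` package; the image path is illustrative, and the enum spelling varies by release (recent versions use `LandmarksType.TWO_D`, older ones `LandmarksType._2D`).

```python
import face_alignment
from skimage import io

# Build a 2D landmark detector on CPU (weights are downloaded on first use).
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType.TWO_D, device="cpu")

image = io.imread("face.jpg")            # any RGB image containing a face (illustrative path)
landmarks = fa.get_landmarks(image)      # list of (68, 2) arrays, one per detected face
print(len(landmarks), landmarks[0].shape)
```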
Nexa SDK is a comprehensive toolkit for supporting ONNX and GGML models