A Pythonic framework to simplify AI service building
A unified framework for scalable computing
Ready-to-use OCR with 80+ supported languages
Single-cell analysis in Python
Library for OCR-related tasks powered by Deep Learning
GPU environment management and cluster orchestration
Large Language Model Text Generation Inference
Run local LLMs on any device; open source
LLM training code for MosaicML foundation models
Powering Amazon's custom machine learning chips
PyTorch library of curated Transformer models and their components
A library for accelerating Transformer models on NVIDIA GPUs
DoWhy is a Python library for causal inference
Bring the notion of Model-as-a-Service to life
State-of-the-art diffusion models for image and audio generation
A high-performance ML model serving framework that offers dynamic batching
The Triton Inference Server provides an optimized cloud and edge inferencing solution
OpenAI-style API for open large language models
Trainable models and NN optimization tools
Uplift modeling and causal inference with machine learning algorithms
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
Build your chatbot within minutes on your favorite device
Unified Model Serving Framework
Replace OpenAI GPT with another LLM in your app
Library for serving Transformers models on Amazon SageMaker