Training and deploying machine learning models on Amazon SageMaker
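A minimal sketch of the train-then-deploy loop with the SageMaker Python SDK, assuming a PyTorch training script; the role ARN, S3 paths, and `train.py` are placeholders:

```python
from sagemaker.pytorch import PyTorch

# role ARN, S3 bucket, and train.py are hypothetical placeholders
estimator = PyTorch(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    framework_version="2.1",
    py_version="py310",
)
estimator.fit({"training": "s3://my-bucket/train"})

# deploy the trained model behind a real-time endpoint
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```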
Run Local LLMs on Any Device. Open-source and available for commercial use
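This matches GPT4All's tagline; assuming that project, a minimal sketch using its Python bindings (the model name is an example and is downloaded on first use):

```python
from gpt4all import GPT4All

# example model name; fetched automatically on first use
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")
with model.chat_session():
    print(model.generate("Summarize retrieval-augmented generation in one sentence.",
                         max_tokens=80))
```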
A high-throughput and memory-efficient inference and serving engine for LLMs
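This reads like vLLM's description; if so, offline batch inference looks roughly like this (the model choice is purely illustrative):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # small model just for illustration
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() batches prompts and schedules them over the paged KV cache
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```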
Port of Facebook's LLaMA model in C/C++
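For llama.cpp, the usual Python route is the separate llama-cpp-python bindings; a minimal sketch, assuming a local GGUF model at the path shown:

```python
from llama_cpp import Llama

# model path is a placeholder; any GGUF-format model works
llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```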
Single-cell analysis in Python
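That is Scanpy's tagline; a typical preprocessing-to-clustering pass over its bundled PBMC demo dataset might look like:

```python
import scanpy as sc

adata = sc.datasets.pbmc3k()                 # bundled 3k-cell demo dataset
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.tl.pca(adata)
sc.pp.neighbors(adata)
sc.tl.umap(adata)
sc.tl.leiden(adata)                          # graph clustering (needs leidenalg)
sc.pl.umap(adata, color="leiden")
```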
The official Python client for the Huggingface Hub
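Two of huggingface_hub's most common operations, downloading a file from a repo and querying the Hub, in a short sketch:

```python
from huggingface_hub import hf_hub_download, list_models

# fetch a single file from a model repo (cached locally)
path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
print(path)

# browse the Hub programmatically
for m in list_models(search="sentiment", limit=5):
    print(m.id)
```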
Ready-to-use OCR with 80+ supported languages
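With EasyOCR, a reader is built once per language set and reused across images; the image path below is a placeholder:

```python
import easyocr

reader = easyocr.Reader(["en", "fr"])          # load models once, reuse
results = reader.readtext("street_sign.jpg")   # placeholder image path
for bbox, text, confidence in results:
    print(text, round(confidence, 2))
```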
Everything you need to build state-of-the-art foundation models
FlashInfer: Kernel Library for LLM Serving
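A minimal sketch of FlashInfer's single-request decode attention, following the pattern from its quickstart; the head counts, shapes, and dtypes here are assumptions:

```python
import torch
import flashinfer

num_qo_heads, num_kv_heads, head_dim, kv_len = 32, 8, 128, 4096
q = torch.randn(num_qo_heads, head_dim, dtype=torch.half, device="cuda")
k = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.half, device="cuda")
v = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.half, device="cuda")

# fused decode attention of one new query against the whole KV cache
o = flashinfer.single_decode_with_kv_cache(q, k, v)
print(o.shape)  # (num_qo_heads, head_dim)
```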
Uplift modeling and causal inference with machine learning algorithms
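This is CausalML's tagline; a hedged sketch of its meta-learner interface on synthetic data (the data-generating process is made up for illustration):

```python
import numpy as np
from causalml.inference.meta import LRSRegressor

# synthetic data: covariates X, binary treatment, outcome with a 0.5 uplift
n = 1000
X = np.random.normal(size=(n, 5))
treatment = np.random.binomial(1, 0.5, size=n)
y = X[:, 0] + 0.5 * treatment + np.random.normal(size=n)

ate, lb, ub = LRSRegressor().estimate_ate(X, treatment, y)
print(f"ATE: {ate} (95% CI: {lb}, {ub})")
```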
A Pythonic framework to simplify AI service building
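The description matches Lepton AI's framework; assuming that is the project, a service is a Photon subclass whose handler methods become HTTP endpoints when deployed (a sketch based on its README pattern):

```python
from leptonai.photon import Photon

class Echo(Photon):
    # each handler is exposed as an HTTP endpoint when the photon runs
    @Photon.handler
    def echo(self, text: str) -> str:
        return text
```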
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
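chatglm.cpp also ships optional Python bindings; a minimal sketch under the assumption of a locally converted GGML model file (path and message API taken from memory of its README, so treat the details as assumptions):

```python
import chatglm_cpp

# path to a locally converted GGML model file (placeholder)
pipeline = chatglm_cpp.Pipeline("./models/chatglm3-ggml.bin")
reply = pipeline.chat([chatglm_cpp.ChatMessage(role="user", content="Hello!")])
print(reply.content)
```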
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
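A sketch of AIMET's quantization-simulation flow for PyTorch, assuming the aimet_torch package; the toy model and calibration callback are placeholders:

```python
import torch
from aimet_torch.quantsim import QuantizationSimModel

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU()).eval()
dummy_input = torch.randn(1, 16)

# wrap the model with fake-quantization ops
sim = QuantizationSimModel(model, dummy_input=dummy_input)

# calibrate encodings by running representative data through the model
def forward_pass(wrapped_model, _):
    with torch.no_grad():
        wrapped_model(dummy_input)

sim.compute_encodings(forward_pass, None)
print(sim.model)  # quantization-simulated model, ready for eval or export
```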
Gaussian processes in TensorFlow
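With GPflow, exact GP regression on toy 1-D data takes a few lines: build a model from data and a kernel, optimize the marginal likelihood, then predict:

```python
import numpy as np
import gpflow

# toy 1-D regression data
X = np.random.rand(50, 1)
Y = np.sin(6 * X) + 0.1 * np.random.randn(50, 1)

model = gpflow.models.GPR(data=(X, Y), kernel=gpflow.kernels.SquaredExponential())
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)

mean, var = model.predict_f(np.array([[0.5]]))
print(mean.numpy(), var.numpy())
```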
A unified framework for scalable computing
DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions
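DoWhy's model-identify-estimate flow on its built-in synthetic dataset, mirroring the canonical getting-started example:

```python
import dowhy.datasets
from dowhy import CausalModel

data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=3,
                                     num_samples=5000, treatment_is_binary=True)

model = CausalModel(data=data["df"], treatment=data["treatment_name"],
                    outcome=data["outcome_name"], graph=data["gml_graph"])
estimand = model.identify_effect()
estimate = model.estimate_effect(
    estimand, method_name="backdoor.propensity_score_matching")
print(estimate.value)  # should recover roughly beta=10
```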
AI interface for tinkerers (Ollama, Haystack RAG, Python)
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
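This is EconML's tagline; a sketch of its double machine learning estimator on synthetic data where the treatment effect varies with a covariate (the data-generating process is made up for illustration):

```python
import numpy as np
from econml.dml import LinearDML

# synthetic data: the effect of T on Y grows with X[:, 0]
n = 2000
X = np.random.normal(size=(n, 3))
W = np.random.normal(size=(n, 2))          # confounders
T = np.random.binomial(1, 0.5, size=n)     # binary treatment
Y = (1 + X[:, 0]) * T + W[:, 0] + np.random.normal(size=n)

est = LinearDML(discrete_treatment=True)
est.fit(Y, T, X=X, W=W)
print(est.effect(X[:5]))  # per-individual treatment-effect estimates
```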
State-of-the-art Parameter-Efficient Fine-Tuning
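This is PEFT's tagline; wrapping a transformer with a LoRA adapter so that only a small fraction of parameters trains (the base model and target modules are illustrative):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # example model
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```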
Superduper: Integrate AI models and machine learning workflows with your database
Adversarial Robustness Toolbox (ART) - Python Library for ML security
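A short evasion-attack sketch with ART, close to its getting-started example; the scikit-learn model and the eps value are illustrative:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = load_iris(return_X_y=True)
model = SVC(probability=True).fit(X, y)

# wrap the fitted model so ART can compute gradients against it
classifier = SklearnClassifier(model=model, clip_values=(X.min(), X.max()))
attack = FastGradientMethod(estimator=classifier, eps=0.3)
X_adv = attack.generate(x=X.astype(np.float32))

print("flipped predictions:", (model.predict(X_adv) != model.predict(X)).mean())
```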
GPU environment management and cluster orchestration
The Triton Inference Server provides an optimized cloud and edge inferencing solution
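Client-side, a request through Triton's HTTP Python client looks like this; the model name, tensor names, and shapes depend on the deployed model's config.pbtxt and are assumptions here:

```python
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# tensor name, shape, and dtype must match the deployed model's config
inp = httpclient.InferInput("INPUT0", [1, 16], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))

result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```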
Operating LLMs in production
Optimizing inference proxy for LLMs
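Because the proxy exposes an OpenAI-compatible endpoint, it can be driven with the standard OpenAI client; the base URL, port, and the model-name prefix convention for selecting a technique are assumptions drawn from optillm's README:

```python
from openai import OpenAI

# point a standard OpenAI client at the local proxy (port is an assumption)
client = OpenAI(base_url="http://localhost:8000/v1", api_key="no-key-needed")

resp = client.chat.completions.create(
    # prefixes such as "moa-" select the optimization technique to apply
    model="moa-gpt-4o-mini",
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
)
print(resp.choices[0].message.content)
```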