Training and deploying machine learning models on Amazon SageMaker
Powering Amazon's custom machine learning chips
Run local LLMs on any device. Open source
Port of Facebook's LLaMA model in C/C++
A high-throughput and memory-efficient inference and serving engine
Single-cell analysis in Python
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
Everything you need to build state-of-the-art foundation models
DoWhy is a Python library for causal inference
Uplift modeling and causal inference with machine learning algorithms
Gaussian processes in TensorFlow
Operating LLMs in production
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
AI interface for tinkerers (Ollama, Haystack RAG, Python)
The official Python client for the Huggingface Hub
The unofficial Python package that returns responses from Google Bard
The easiest and laziest way to build multi-agent LLM applications
Adversarial Robustness Toolbox (ART) - Python Library for ML security
A Pythonic framework to simplify AI service building
Database system for building simpler and faster AI-powered applications
A framework dedicated to neural data processing
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
MII makes low-latency and high-throughput inference possible
An easy-to-use LLM quantization package with user-friendly APIs
Trainable models and neural network optimization tools