Training and deploying machine learning models on Amazon SageMaker
Run local LLMs on any device; open source
A high-throughput and memory-efficient inference and serving engine
Single-cell analysis in Python
Port of Facebook's LLaMA model in C/C++
Ready-to-use OCR with 80+ supported languages
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
DoWhy is a Python library for causal inference
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Uplift modeling and causal inference with machine learning algorithms
Everything you need to build state-of-the-art foundation models
Operating LLMs in production
Gaussian processes in TensorFlow
The unofficial Python package that returns responses from Google Bard
The official Python client for the Hugging Face Hub
Database system for building simpler and faster AI-powered applications
The easiest and laziest way to build multi-agent LLM applications
Adversarial Robustness Toolbox (ART) - Python Library for ML security
A Pythonic framework to simplify AI service building
A framework dedicated to neural data processing
MII makes low-latency and high-throughput inference possible
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
Trainable models and NN optimization tools
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Integrate, train, and manage any AI models and APIs with your database