Training and deploying machine learning models on Amazon SageMaker
Run local LLMs on any device; open-source
Single-cell analysis in Python
Port of Facebook's LLaMA model in C/C++
AI interface for tinkerers (Ollama, Haystack RAG, Python)
A high-throughput and memory-efficient inference and serving engine for LLMs
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
A Pythonic framework to simplify AI service building
DoWhy is a Python library for causal inference
Everything you need to build state-of-the-art foundation models
The official Python client for the Huggingface Hub
Uplift modeling and causal inference with machine learning algorithms
A high-performance ML model serving framework that offers dynamic batching
Operating LLMs in production
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
The unofficial Python package that returns responses from Google Bard
Integrate, train and manage any AI models and APIs with your database
Large Language Model Text Generation Inference
Gaussian processes in TensorFlow
PyTorch domain library for recommendation systems
Trainable models and NN optimization tools
Superduper: Integrate AI models and machine learning workflows
An easy-to-use LLM quantization package with user-friendly APIs
Official inference library for Mistral models