Training and deploying machine learning models on Amazon SageMaker
Run local LLMs on any device; open source
Single-cell analysis in Python (usage sketch after this list)
Port of Facebook's LLaMA model in C/C++
AI interface for tinkerers (Ollama, Haystack RAG, Python)
A high-throughput and memory-efficient inference and serving engine (usage sketch after this list)
Python package for ML-based heterogeneous treatment effects estimation (usage sketch after this list)
A Pythonic framework to simplify AI service building
Everything you need to build state-of-the-art foundation models
The official Python client for the Huggingface Hub (usage sketch after this list)
DoWhy is a Python library for causal inference (usage sketch after this list)
Uplift modeling and causal inference with machine learning algorithms
Operating LLMs in production
A high-performance ML model serving framework that offers dynamic batching
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
FlashInfer: Kernel Library for LLM Serving
Gaussian processes in TensorFlow (usage sketch after this list)
Large Language Model Text Generation Inference
An unofficial Python package that returns responses from Google Bard
Integrate, train and manage any AI models and APIs with your database
An easy-to-use LLM quantization package with user-friendly APIs
PyTorch domain library for recommendation systems
Official inference library for Mistral models
Efficient few-shot learning with Sentence Transformers
Neural Network Compression Framework for enhanced OpenVINO inference
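For the single-cell analysis entry (Scanpy), a minimal sketch of the usual preprocess → embed → cluster workflow, assuming the PBMC 3k example dataset bundled with the library; the parameter values are illustrative, not prescribed.

```python
import scanpy as sc

# Load the small public PBMC 3k dataset shipped via scanpy's datasets module.
adata = sc.datasets.pbmc3k()

# Standard preprocessing: library-size normalization, log transform, HVG selection.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)

# Dimensionality reduction, neighborhood graph, UMAP embedding, Leiden clustering.
sc.tl.pca(adata)
sc.pp.neighbors(adata)
sc.tl.umap(adata)
sc.tl.leiden(adata)   # needs the optional leidenalg/igraph dependency

print(adata.obs["leiden"].value_counts())
```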
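For the high-throughput inference and serving engine entry (vLLM), a minimal offline-batch sketch; the model id, prompts, and sampling settings are placeholders.

```python
from vllm import LLM, SamplingParams

# Load a small Hugging Face model for offline batched generation.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts in one call.
outputs = llm.generate(["The capital of France is", "Large language models are"], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```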
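For the heterogeneous treatment effects entry (EconML), a sketch using the LinearDML estimator; the synthetic data-generating process below is made up purely for illustration.

```python
import numpy as np
from econml.dml import LinearDML

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))                              # effect modifiers
W = rng.normal(size=(n, 3))                              # confounders
T = (X[:, 0] + rng.normal(size=n) > 0).astype(float)     # binary treatment
Y = (1.0 + 0.5 * X[:, 0]) * T + W[:, 0] + rng.normal(size=n)  # heterogeneous effect

# Double machine learning with a linear final stage for the CATE.
est = LinearDML(discrete_treatment=True, random_state=0)
est.fit(Y, T, X=X, W=W)

# Conditional average treatment effects at two feature values.
print(est.effect(np.array([[0.0, 0.0], [2.0, 0.0]])))
```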
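For the Huggingface Hub client entry, a sketch of downloading files programmatically; the repo ids and filename are examples only.

```python
from huggingface_hub import hf_hub_download, snapshot_download

# Fetch a single file from a public model repo into the local cache.
config_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
print(config_path)

# Or mirror an entire repo (all files) into a local snapshot directory.
local_dir = snapshot_download(repo_id="sentence-transformers/all-MiniLM-L6-v2")
print(local_dir)
```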
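For the DoWhy entry, a sketch of its model → identify → estimate → refute workflow on synthetic data; the variable names, the linear-regression estimator, and the placebo refuter are illustrative choices.

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel

# Synthetic data: one confounder x, binary treatment t, true effect of 2.
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
t = (x + rng.normal(size=n) > 0).astype(int)
y = 2.0 * t + x + rng.normal(size=n)
df = pd.DataFrame({"x": x, "t": t, "y": y})

# Model the causal graph, identify the estimand, estimate, then refute.
model = CausalModel(data=df, treatment="t", outcome="y", common_causes=["x"])
estimand = model.identify_effect(proceed_when_unidentifiable=True)
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print("ATE estimate:", estimate.value)

refutation = model.refute_estimate(estimand, estimate,
                                   method_name="placebo_treatment_refuter")
print(refutation)
```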
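For the Gaussian processes in TensorFlow entry (GPflow), a sketch of exact GP regression on a toy 1-D problem; the kernel choice and data are illustrative.

```python
import numpy as np
import gpflow

# Toy 1-D regression data.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 1))
Y = np.sin(6.0 * X) + 0.1 * rng.normal(size=(50, 1))

# Exact GP regression with a squared exponential (RBF) kernel.
model = gpflow.models.GPR(data=(X, Y), kernel=gpflow.kernels.SquaredExponential())
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)

# Posterior predictive mean and variance on a test grid.
X_test = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
mean, var = model.predict_y(X_test)
print(mean.numpy()[:3], var.numpy()[:3])
```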