This repository provides an advanced RAG
Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible
Models for the spaCy Natural Language Processing (NLP) library
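For illustration, a minimal spaCy sketch, assuming the small English pipeline has been installed via `python -m spacy download en_core_web_sm`:

```python
import spacy

# Load a packaged pipeline (a spaCy "model") and run it on text
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

# Named entities predicted by the pipeline's NER component
for ent in doc.ents:
    print(ent.text, ent.label_)
```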
DoWhy is a Python library for causal inference
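A minimal sketch of DoWhy's model/identify/estimate workflow; the dataset and column names below are hypothetical:

```python
import pandas as pd
from dowhy import CausalModel

# Hypothetical observational data: did a discount (treatment) change spend (outcome)?
df = pd.DataFrame({
    "discount": [0, 1, 0, 1, 1, 0, 1, 0],
    "spend":    [10, 14, 9, 15, 13, 11, 16, 8],
    "loyalty":  [1, 2, 1, 3, 2, 2, 3, 1],   # assumed common cause
})

model = CausalModel(
    data=df,
    treatment="discount",
    outcome="spend",
    common_causes=["loyalty"],
)
estimand = model.identify_effect()
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print(estimate.value)
```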
Set of tools to assess and improve LLM security
Database system for building simpler and faster AI-powered applications
A modular, primitive-first, python-first PyTorch library
Structured outputs for LLMs
Probabilistic reasoning and statistical analysis in TensorFlow
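Assuming this refers to TensorFlow Probability, a minimal sketch of working with its distribution objects:

```python
import tensorflow_probability as tfp

tfd = tfp.distributions
prior = tfd.Normal(loc=0.0, scale=1.0)   # standard normal distribution
samples = prior.sample(5)                # draw 5 samples
log_probs = prior.log_prob(samples)      # evaluate the log-density at those samples
print(samples.numpy(), log_probs.numpy())
```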
ContextGem: Effortless LLM extraction from documents
Chat & pretrained large audio language model proposed by Alibaba Cloud
Renderer for the harmony response format to be used with gpt-oss
A game-theoretic approach to explain the output of ML models
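Assuming this line refers to SHAP, a minimal sketch of computing and plotting Shapley-value explanations; the scikit-learn dataset and model are illustrative choices:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100).fit(X, y)

explainer = shap.Explainer(model.predict, X)   # model-agnostic Shapley-value explainer
shap_values = explainer(X.iloc[:50])           # explain the first 50 predictions
shap.plots.beeswarm(shap_values)               # per-feature contribution summary
```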
Machine Learning Systems: Design and Implementation
Model Context Protocol server that integrates AgentQL's data extraction capabilities
FlashInfer: Kernel Library for LLM Serving
Recognition and resolution of numbers, units, date/time, etc.
A very simple framework for state-of-the-art NLP
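Assuming this line refers to Flair, a minimal sketch of pretrained named entity tagging (the model is downloaded on first use):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("ner")      # pretrained English NER tagger
sentence = Sentence("George Washington went to Washington.")
tagger.predict(sentence)                 # annotate the sentence in place

for entity in sentence.get_spans("ner"): # print predicted entity spans
    print(entity)
```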
PyTorch library of curated Transformer models and their components
Neural Search
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Capable of understanding text, audio, vision, and video
Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models
Evals is a framework for evaluating LLMs and LLM systems
LLM training code for MosaicML foundation models