Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon)
Simple, Pythonic building blocks to evaluate LLM applications
Python bindings for the Transformer models implemented in C/C++
LLM-based data scientist, AI-native data application
Tools for LLMs such as a web browser, computer access, and a code runner
Serving LangChain LLM apps automagically with FastAPI
Label, clean and enrich text datasets with LLMs
LLM
The unofficial Python package that returns the response of Google Bard
Operating LLMs in production
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Database system for building simpler and faster AI-powered applications
User toolkit for analyzing and interfacing with Large Language Models
Run any Llama 2 locally with a Gradio UI on GPU or CPU from anywhere
Train a 26M-parameter GPT from scratch in just 2h
Phi-3.5 for Mac: Locally-run Vision and Language Models
File Parser optimised for LLM Ingestion with no loss
Integrating LLMs into structured NLP pipelines
PyTorch library of curated Transformer models and their components
Seamlessly integrate LLMs into scikit-learn
State-of-the-art Parameter-Efficient Fine-Tuning
A high-performance ML model serving framework that offers dynamic batching
Framework dedicated to neural data processing
Low-latency REST API for serving text embeddings
AI agent that streamlines the entire process of data analysis