A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
Self-learning data agent that grounds its answers in layers of content
Fast, powerful, git-native ticket tracking in a single bash script
Chinese Llama-3 LLMs developed from Meta Llama 3
A Model Context Protocol server for searching and analyzing arXiv
Refer and Ground Anything Anywhere at Any Granularity
TorchMultimodal is a PyTorch library
ICLR 2024 Spotlight: curation/training code, metadata, distribution
Official implementation of DreamCraft3D
Unlimited text-to-speech conversion
Context data platform for building observable, self-learning AI agents
Language modeling in a sentence representation space
Super Tiny Icons are minuscule SVG versions of your favourite websites
This repository provides an advanced RAG
Build cross-modal and multimodal applications on the cloud
Towards Real-World Vision-Language Understanding
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
LLM-based agent for general purpose software engineering tasks
Open-source framework for intelligent speech interaction
Chat & pretrained large vision language model
Aligns tokens in two versions of a text with differing tokenization.
An advanced paper search agent powered by large language models
MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training
Virtual AI anchor that combines state-of-the-art technology
Data loaders and abstractions for text and NLP