Code for the paper "Evaluating Large Language Models Trained on Code"
Multilingual sentence & image embeddings with BERT
LLM abstractions that aren't obstructions
Framework dedicated to making neural data processing pipelines simple and fast
A state-of-the-art open visual language model
Open-weight, large-scale hybrid-attention reasoning model
Qwen3-Omni is a natively end-to-end, omni-modal LLM
Set of tools to assess and improve LLM security
BISHENG is an open LLM DevOps platform for next-generation apps
Ray Aviary - evaluate multiple LLMs easily
Open source libraries and APIs to build custom preprocessing pipelines
Inference Llama 2 in one file of pure C
LLM training code for MosaicML foundation models
Leveraging BERT and c-TF-IDF to create easily interpretable topics
PyTorch library of curated Transformer models and their components
State-of-the-art Parameter-Efficient Fine-Tuning
MedicalGPT: Training Your Own Medical GPT Model with a ChatGPT Training Pipeline
Swirl queries any number of data sources with APIs
Get started with building full-stack agents using Gemini 2.5 & LangGraph
Chat & pretrained large vision language model
Tongyi Deep Research, the Leading Open-source Deep Research Agent
Qwen2.5-VL is a multimodal large language model series
Training Large Language Model to Reason in a Continuous Latent Space
Ling is a MoE LLM provided and open-sourced by InclusionAI
Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible