Qwen2.5-VL is a series of multimodal large language models
Deep learning optimization library: makes distributed training easy
Easy-to-use deep learning framework with 3 key features
Free, high-quality text-to-speech API endpoint, a drop-in replacement for OpenAI's
Swirl queries any number of data sources with APIs
Building an Intelligent Agent from Scratch
Run LLM prompts from your shell
An end-to-end Data Scientist
Marrying Grounding DINO with Segment Anything & Stable Diffusion
Fast-stable-diffusion + DreamBooth
Ultimate meta-skill for generating best-in-class Claude Code skills
Multi-agent autonomous startup system for Claude Code
Persistent context and multi-instance coordination
Multimodal embedding and reranking models built on Qwen3-VL
A New Axis of Sparsity for Large Language Models
Improve human sleep through science
LLM training in simple, raw C/CUDA
Less Code, Lower Barrier, Faster Deployment
A simple, secure MCP-to-OpenAPI proxy server
Code release for Cut and Learn for Unsupervised Object Detection
VMZ: Model Zoo for Video Modeling
Official implementation of Watermark Anything with Localized Messages
High-resolution models for human tasks
Code for the paper "Evaluating Large Language Models Trained on Code"
Extensible AGI Framework