Large-scale autoregressive pixel model for image generation by OpenAI
A library for Multilingual Unsupervised or Supervised word Embeddings
Code for the paper "Improved Techniques for Training GANs"
Code for reproducing key results in the paper
Code for "Image Generation from Scene Graphs", Johnson et al, CVPR 2018
Open-source code agent designed for Lean 4
Open language model developed by NVIDIA as part of the Nemotron-3 family
Large language model developed and released by NVIDIA
685B-parameter model with improved agentic capabilities and consistency
Self-evolving AI model for agents, coding, and complex workflows
Dense multimodal Qwen model for coding, agents, and long context
OpenAI’s open-weight 120B model optimized for reasoning and tooling
Multimodal agent model for coding, orchestration, and autonomy
Open multimodal model for coding, agents, and long-context tasks
Flagship MoE model for advanced reasoning, coding, and agents
Efficient 13B MoE language model with long context and reasoning modes
Qwen2.5-VL-3B-Instruct: Multimodal model for chat, vision & video
Compact hybrid reasoning language model for intelligent responses
FP8 Qwen model for efficient multimodal coding and agent tasks
Agentic 123B coding model optimized for large-scale engineering
High-efficiency reasoning and agentic intelligence model
OpenAI’s compact 20B open model for fast, agentic, and local use
JetBrains’ 4B-parameter code model for completions
Multimodal 7B model for image, video, and text understanding tasks
VaultGemma: 1B DP-trained Gemma variant for private NLP tasks