GPT-4o mini
A small model with superior textual intelligence and multimodal reasoning.
GPT-4o mini's low cost and latency make it suitable for a broad range of applications, such as those that chain or parallelize multiple model calls (e.g., calling multiple APIs), pass a large volume of context to the model (e.g., a full code base or conversation history), or interact with customers through fast, real-time text responses (e.g., customer support chatbots). Today, GPT-4o mini supports text and vision in the API, with support for text, image, video, and audio inputs and outputs coming in the future. The model has a context window of 128K tokens, supports up to 16K output tokens per request, and has knowledge up to October 2023. Thanks to the improved tokenizer shared with GPT-4o, handling non-English text is now even more cost-effective.
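As a minimal sketch of how an application might respect these limits, the snippet below builds a Chat Completions-style request payload for GPT-4o mini and validates the output-token budget locally. The `build_request` helper and its validation logic are illustrative assumptions, not part of any SDK; no network call is made.

```python
# Sketch: assembling a chat request for GPT-4o mini and checking it against
# the documented limits (128K-token context window, 16K output tokens per
# request). The payload shape mirrors the Chat Completions API; the helper
# itself is hypothetical and nothing is sent over the network.

CONTEXT_WINDOW = 128_000    # maximum tokens of combined input + output
MAX_OUTPUT_TOKENS = 16_000  # maximum output tokens per request

def build_request(messages, max_tokens=1_000):
    """Assemble a chat-completion payload for gpt-4o-mini.

    Raises ValueError if the requested output budget exceeds the
    model's documented per-request output limit.
    """
    if max_tokens > MAX_OUTPUT_TOKENS:
        raise ValueError(
            f"max_tokens={max_tokens} exceeds the {MAX_OUTPUT_TOKENS}-token "
            "output limit for gpt-4o-mini"
        )
    return {
        "model": "gpt-4o-mini",
        "messages": messages,
        "max_tokens": max_tokens,
    }

request = build_request(
    [{"role": "user", "content": "Summarize this support ticket."}],
    max_tokens=512,
)
```

With a real client SDK, a payload like this would typically be passed along as keyword arguments to the chat-completion call.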
Learn more
LLaVA
LLaVA (Large Language-and-Vision Assistant) is an innovative multimodal model that integrates a vision encoder with the Vicuna language model to facilitate comprehensive visual and language understanding. Through end-to-end training, LLaVA exhibits impressive chat capabilities, emulating the multimodal functionalities of models like GPT-4. Notably, LLaVA-1.5 has achieved state-of-the-art performance across 11 benchmarks, utilizing publicly available data and completing training in approximately one day on a single 8-A100 node, surpassing methods that rely on billion-scale datasets. The development of LLaVA involved the creation of a multimodal instruction-following dataset, generated using language-only GPT-4. This dataset comprises 158,000 unique language-image instruction-following samples, including conversations, detailed descriptions, and complex reasoning tasks. This data has been instrumental in training LLaVA to perform a wide array of visual and language tasks effectively.
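The core architectural idea above, a trained projector that maps vision-encoder features into the language model's embedding space so image patches can be consumed as ordinary tokens, can be sketched in a few lines. The dimensions below assume CLIP ViT-L/14 at 336px resolution (1024-dim features, a 24x24 patch grid) and Vicuna-7B (4096-dim embeddings), and the two-layer GELU MLP follows the LLaVA-1.5 projector design; the randomly initialized weights stand in for parameters that are learned end-to-end in practice.

```python
import numpy as np

# Sketch of LLaVA's vision-language bridge: a small MLP projector maps
# vision-encoder patch features into the LLM's embedding space.
# Dimensions are assumptions based on CLIP ViT-L/14-336 and Vicuna-7B.

rng = np.random.default_rng(0)

VISION_DIM = 1024   # CLIP ViT-L/14 patch-feature size
HIDDEN_DIM = 4096   # Vicuna-7B embedding size
NUM_PATCHES = 576   # 24 x 24 patch grid for a 336 x 336 input image

# Randomly initialized projector weights (learned end-to-end in practice).
W1 = rng.standard_normal((VISION_DIM, HIDDEN_DIM)) * 0.02
W2 = rng.standard_normal((HIDDEN_DIM, HIDDEN_DIM)) * 0.02

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def project(patch_features):
    """Map (num_patches, VISION_DIM) features into the LLM embedding space."""
    return gelu(patch_features @ W1) @ W2

# Stand-in for real encoder output; the projected rows would be interleaved
# with text-token embeddings before being fed to the language model.
image_features = rng.standard_normal((NUM_PATCHES, VISION_DIM))
visual_tokens = project(image_features)  # shape: (576, 4096)
```

The projector is the only new component between the two pretrained models, which is part of why LLaVA's training is cheap enough to finish in about a day on a single 8-A100 node.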
Learn more
Nemotron 3 Nano Omni
NVIDIA Nemotron 3 Nano Omni is an open, omni-modal foundation model designed to unify perception and reasoning across text, images, audio, video, and documents within a single efficient architecture. It eliminates the need for separate models for each modality, reducing inference latency, orchestration complexity, and cost while maintaining consistent cross-modal context. It is purpose-built for agentic AI systems, acting as a perception and context sub-agent that gives larger AI agents the ability to “see, hear, and read” in real time across screens, recordings, and structured or unstructured data. It supports advanced multimodal reasoning tasks such as document understanding, speech recognition, long audio-video analysis, and computer-use workflows, enabling agents to interpret dynamic interfaces and complex environments. Built with a hybrid architecture optimized for long context and throughput, it can process large inputs like multi-page documents.
Learn more
Grok 4.20
Grok 4.20 is an advanced artificial intelligence model developed by xAI to elevate reasoning and natural language understanding. Built on the high-performance Colossus supercomputer, it is engineered for speed, scale, and accuracy. Grok 4.20 processes multimodal inputs such as text and images, with video support planned for future releases. The model excels in scientific, technical, and linguistic tasks, delivering precise, context-aware responses, and its architecture supports deep reasoning and sophisticated problem-solving. Enhanced moderation improves output reliability and reduces bias compared to earlier versions. Overall, Grok 4.20 represents a significant step toward more human-like AI reasoning and interpretation.
Learn more