ERNIE 4.5
ERNIE 4.5 is a conversational AI platform developed by Baidu that uses advanced natural language processing (NLP) models to support human-like interactions. The platform is part of Baidu’s ERNIE (Enhanced Representation through Knowledge Integration) series, which integrates multimodal capabilities spanning text, image, and voice. ERNIE 4.5 improves the ability of AI models to understand complex context and deliver more accurate, nuanced responses, making it suitable for applications ranging from customer service and virtual assistants to content creation and enterprise-level automation.
Learn more
Olmo 2
Olmo 2 is a family of fully open language models developed by the Allen Institute for AI (AI2), designed to give researchers and developers transparent access to training data, open-source code, reproducible training recipes, and comprehensive evaluations. The models are trained on up to 5 trillion tokens and are competitive with leading open-weight models such as Llama 3.1 on English academic benchmarks. Olmo 2 emphasizes training stability, implementing techniques to prevent loss spikes during long training runs, and uses staged training interventions during late pretraining to address capability deficiencies. The models incorporate state-of-the-art post-training methodologies from AI2's Tülu 3, producing the Olmo 2-Instruct models. To guide improvements across development stages, AI2 established the Open Language Modeling Evaluation System (OLMES), an actionable evaluation framework consisting of 20 benchmarks that assess core capabilities.
Learn more
Molmo 2
Molmo 2 is a new suite of state-of-the-art open vision-language models with fully open weights, training data, and training code. It extends the original Molmo family’s grounded image understanding to video and multi-image inputs, enabling video understanding, pointing, tracking, dense captioning, and question answering with strong spatial and temporal reasoning across frames. Molmo 2 includes three variants: an 8-billion-parameter model optimized for overall video grounding and QA, a 4-billion-parameter version designed for efficiency, and a 7-billion-parameter Olmo-backed model offering a fully open end-to-end architecture, including the underlying language model. These models outperform earlier Molmo versions on core benchmarks and set new open-model high-water marks for image and video understanding tasks, often competing with substantially larger proprietary systems while training on a fraction of the data used by comparable closed models.
Learn more
Reka Flash 3
Reka Flash 3 is a 21-billion-parameter multimodal AI model developed by Reka AI, designed to excel in general chat, coding, instruction following, and function calling. It processes and reasons over text, image, video, and audio inputs, offering a compact, general-purpose solution for a range of applications. Trained from scratch on diverse datasets, including publicly accessible and synthetic data, Reka Flash 3 underwent instruction tuning on curated, high-quality data to optimize performance. The final training stage used reinforcement learning with REINFORCE Leave-One-Out (RLOO) and both model-based and rule-based rewards, enhancing its reasoning capabilities. With a context length of 32,000 tokens, Reka Flash 3 performs competitively with proprietary models such as OpenAI's o1-mini, making it suitable for low-latency or on-device deployments. At full precision (fp16) the model requires 39 GB of memory, but it can be compressed to as little as 11 GB using 4-bit quantization.
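The core idea of RLOO is simple: for each prompt, sample k completions, score each one, and use the mean reward of the *other* k-1 samples as a per-sample baseline. A minimal sketch of that advantage computation (illustrative only; not Reka's actual implementation):

```python
def rloo_advantages(rewards):
    """Given k rewards for k sampled completions of the same prompt,
    return each sample's RLOO advantage: its reward minus the mean
    reward of the other k-1 samples (the leave-one-out baseline)."""
    k = len(rewards)
    if k < 2:
        raise ValueError("RLOO needs at least 2 samples per prompt")
    total = sum(rewards)
    return [r - (total - r) / (k - 1) for r in rewards]

# Example: four sampled completions scored by a reward model.
# Advantages always sum to zero, which keeps gradient updates centered.
print(rloo_advantages([1.0, 0.0, 0.5, 0.5]))
```

Because the baseline for each sample excludes that sample's own reward, the estimator stays unbiased while still reducing variance, without training a separate value network.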
Learn more
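The memory figures quoted for Reka Flash 3 follow directly from the parameter count and bits per weight. A back-of-the-envelope check (dense-model approximation; real checkpoints add overhead from quantization scales and any layers kept at higher precision, which is why the quoted 4-bit figure is 11 GB rather than the raw ~10 GiB):

```python
def model_size_gib(num_params, bits_per_param):
    """Approximate checkpoint size in GiB for a dense model:
    parameters * bits, converted to bytes, then to GiB."""
    return num_params * bits_per_param / 8 / 2**30

params = 21e9  # Reka Flash 3's parameter count
print(f"fp16:  {model_size_gib(params, 16):.1f} GiB")  # ~39 GiB, matching the quoted figure
print(f"4-bit: {model_size_gib(params, 4):.1f} GiB")   # raw weights only; overhead brings it toward 11 GB
```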