Gemini Robotics-ER 1.6
Gemini Robotics is a family of AI models developed by Google DeepMind that brings advanced multimodal intelligence into the physical world, enabling robots to perceive, reason, and act in real-world environments. Built on the Gemini 2.0 foundation, it extends traditional AI capabilities by adding physical action as an output modality: robots interpret visual input and natural-language instructions and convert them directly into motor commands to complete tasks. The family includes a vision-language-action model that processes images and instructions to execute tasks, as well as a complementary embodied reasoning model, Gemini Robotics-ER, that specializes in spatial understanding, planning, and decision-making in physical environments. Together, these models let robots generalize to new situations, objects, and environments, performing complex, multi-step tasks even when they were not explicitly trained for them.
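The perceive-reason-act loop described above can be sketched in miniature. Everything here is a hypothetical illustration (the `Scene` type, the `plan_pick_and_place` planner, and the command strings are invented for this example, not Google APIs): a detected scene plus a natural-language instruction is turned into a short sequence of motor commands.

```python
# Hypothetical sketch of a perceive-reason-act pipeline: visual input
# (objects detected in the camera frame) plus a language instruction is
# converted into motor commands. All names are illustrative, not real APIs.

from dataclasses import dataclass


@dataclass
class Scene:
    """Objects detected in the camera frame, mapped to 2D positions."""
    objects: dict


def plan_pick_and_place(scene: Scene, instruction: str) -> list:
    """Turn an instruction like 'put the X in the Y' into a crude plan."""
    # "Reasoning" step: parse which object moves where.
    source, target = (
        instruction.lower().replace("put the ", "").split(" in the ")
    )
    sx, sy = scene.objects[source.strip()]
    tx, ty = scene.objects[target.strip()]
    # "Acting" step: emit motor commands as an ordered plan.
    return [
        f"move_to({sx}, {sy})",
        "close_gripper()",
        f"move_to({tx}, {ty})",
        "open_gripper()",
    ]


scene = Scene(objects={"apple": (0.2, 0.5), "bowl": (0.7, 0.3)})
plan = plan_pick_and_place(scene, "put the apple in the bowl")
```

A real system would replace the string parsing with the model's multimodal reasoning and the command strings with actual actuator calls; the sketch only shows the shape of the input-to-action mapping.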
Learn more
HunyuanOCR
Tencent Hunyuan is a large-scale, multimodal AI model family developed by Tencent that spans text, image, video, and 3D modalities, designed for general-purpose tasks such as content generation, visual reasoning, and business automation. Its lineup includes variants optimized for natural-language understanding, multimodal vision-language comprehension (e.g., image and video understanding), text-to-image creation, video generation, and 3D content generation. Hunyuan models use a mixture-of-experts architecture and other innovations (such as hybrid "mamba-transformer" designs) to deliver strong performance on reasoning, long-context understanding, and cross-modal tasks with efficient inference. For example, the vision-language model Hunyuan-Vision-1.5 supports "thinking-on-image", enabling deep multimodal understanding and reasoning over images, video frames, diagrams, and spatial data.
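The mixture-of-experts idea mentioned above can be shown in a few lines: a gate scores each expert, only the top-k experts run, and their outputs are combined with renormalized gate weights. This is a generic sketch of the technique, not Tencent's implementation, and the toy experts and scores are invented for illustration.

```python
# Minimal top-k mixture-of-experts routing sketch. Illustrates the general
# technique only; not Hunyuan's actual architecture or code.

import math


def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def moe_forward(x, experts, gate_scores, top_k=2):
    """Route input x to the top_k highest-scoring experts and combine
    their outputs, weighted by the renormalized gate scores."""
    ranked = sorted(range(len(experts)), key=lambda i: gate_scores[i],
                    reverse=True)
    chosen = ranked[:top_k]
    weights = softmax([gate_scores[i] for i in chosen])
    # Only the chosen experts are evaluated -- the source of MoE's
    # inference efficiency: compute scales with top_k, not expert count.
    return sum(w * experts[i](x) for w, i in zip(weights, chosen))


# Four toy "experts", each a simple scalar function.
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x ** 2, lambda x: -x]
gate_scores = [0.1, 3.0, 2.0, 0.5]  # experts 1 and 2 score highest
y = moe_forward(4.0, experts, gate_scores, top_k=2)
```

With `top_k=1` only the single best expert runs; production models route per token and add load-balancing losses, which this sketch omits.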
Learn more
Janus-Pro-7B
Janus-Pro-7B is an open-source multimodal AI model from DeepSeek, designed to excel at both understanding and generating content across text and images. It uses a unified autoregressive architecture with decoupled pathways for visual encoding, enabling strong performance on tasks ranging from text-to-image generation to complex visual comprehension. On several benchmarks it is reported to outperform models such as DALL-E 3 and Stable Diffusion, and it scales across variants from 1 billion to 7 billion parameters. Licensed under the MIT License, Janus-Pro-7B is freely available for both academic and commercial use, and it runs on major operating systems including Linux, macOS, and Windows, for example via Docker.
Learn more
GLM-Image
GLM-Image is a next-generation, open-source image generation model developed by Z.ai that combines deep language understanding with high-fidelity visual synthesis. Unlike traditional diffusion-only models, it uses a hybrid architecture that pairs an autoregressive language model with a diffusion decoder, so it first reasons about the structure, meaning, and relationships within a prompt before generating the image itself. This approach makes GLM-Image well suited to scenarios that require precise semantic control, such as infographics, presentation slides, posters, and diagrams with accurate embedded text and complex layouts. With roughly 16 billion parameters in total, the model achieves strong performance at rendering readable, correctly placed text within images, an area where many image models struggle, while maintaining detailed visual quality and consistency.
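The plan-then-render pipeline described above can be sketched as two toy stages: an autoregressive stage that turns the prompt into an ordered layout, and a decoder stage that renders each planned element. Both stages here are hypothetical stand-ins (a real diffusion decoder synthesizes pixels, not text rows).

```python
# Toy two-stage sketch of a hybrid autoregressive + decoder pipeline.
# Stage 1 plans structure from the prompt; stage 2 renders the plan.
# Purely illustrative; not Z.ai's implementation.


def plan_layout(prompt):
    """Stage 1 (autoregressive planning): turn a comma-separated prompt
    into an ordered layout, one element per requested item."""
    items = [p.strip() for p in prompt.split(",")]
    return [{"text": item, "row": i} for i, item in enumerate(items)]


def render(layout, width=20):
    """Stage 2 (decoding): 'draw' each planned element onto a fixed-width
    canvas. A real diffusion decoder would synthesize pixels instead."""
    return [item["text"].center(width, ".") for item in layout]


layout = plan_layout("Title, Subtitle, Footer")
canvas = render(layout)
```

The point of the split is that semantic decisions (what text, in what order, where) are made once, up front, by the planning stage, so the rendering stage never has to guess at layout, which is why this architecture helps with embedded text and diagrams.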
Learn more