Showing 46 open source projects for "grounding"

  • 1
    CogVLM

    A state-of-the-art open visual language model

    ...It includes checkpoints for chat, base, and grounding variants, plus recipes for model-parallel inference and LoRA fine-tuning. The documentation covers task prompts for general dialogue, visual grounding (box→caption, caption→box, caption+boxes), and GUI agent workflows that produce structured actions with bounding boxes.
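
    As a toy illustration of consuming that grounding output: the snippet below parses bounding boxes out of a model answer, assuming the [[x0,y0,x1,y1]] format with coordinates normalized to 0-999 that the documentation describes (verify the exact convention against the repo before relying on it).

      import re

      def parse_grounding_boxes(answer: str, width: int, height: int):
          """Map [[x0,y0,x1,y1]] spans (0-999 normalized) to pixel coordinates."""
          boxes = []
          for m in re.finditer(r"\[\[(\d+),(\d+),(\d+),(\d+)\]\]", answer):
              x0, y0, x1, y1 = (int(v) for v in m.groups())
              boxes.append((x0 * width // 1000, y0 * height // 1000,
                            x1 * width // 1000, y1 * height // 1000))
          return boxes

      print(parse_grounding_boxes("The mug [[112,260,340,608]] is on the desk.", 1280, 720))
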
    Downloads: 0 This Week
  • 2
    Generative AI

    Sample code and notebooks for Generative AI on Google Cloud

    Generative AI is a comprehensive collection of code samples, notebooks, and demo applications designed to help developers build generative-AI workflows on the Vertex AI platform. It spans multiple modalities (text, image, audio, search with RAG/grounding, and more), showing how to integrate foundation models like the Gemini family into cloud projects. The README emphasises getting started with prompts, datasets, environments, and sample apps, making it ideal for both experimentation and production-ready usage. The repository is organised into folders such as gemini/, search/, vision/, audio/, and rag-grounding/, which helps developers locate use cases by modality. ...
    Downloads: 8 This Week
  • 3
    Grounded-Segment-Anything

    Marrying Grounding DINO with Segment Anything & Stable Diffusion

    Grounded-Segment-Anything is a research-oriented project that combines powerful open-set object detection with pixel-level segmentation and subsequent creative workflows, effectively enabling detection, segmentation, and high-level vision tasks guided by free-form text prompts. The core idea behind the project is to pair Grounding DINO — a zero-shot object detector that can locate objects described by natural language — with Segment Anything Model (SAM), which can produce detailed masks for objects once they are localized. This fusion lets users provide arbitrary text descriptions (e.g., “a cat, a bicycle, or a coffee mug”), have the detection model find relevant bounding boxes, and then use SAM to generate precise segmentation masks that isolate each object in the scene.
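
    A minimal sketch of that detect-then-segment handoff is shown below; the detector and segmenter objects are hypothetical stand-ins for the Grounding DINO and SAM wrappers the repo's demo scripts wire together, not the project's exact API.

      from PIL import Image

      def grounded_segment(image_path: str, text_prompt: str, detector, segmenter):
          """Hypothetical pipeline: text -> boxes (Grounding DINO) -> masks (SAM)."""
          image = Image.open(image_path).convert("RGB")
          # 1) Open-set detection: free-form text becomes scored bounding boxes.
          boxes, labels, scores = detector.detect(image, text_prompt, box_threshold=0.3)
          # 2) Promptable segmentation: each box is refined into a pixel mask.
          masks = [segmenter.mask_from_box(image, box) for box in boxes]
          return list(zip(labels, scores, boxes, masks))
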
    Downloads: 0 This Week
  • 4
    Cookbook (Google Gemini)

    Examples and guides for using the Gemini API

    ...It highlights newly introduced features such as the Gemini 2.5 models (Flash and Pro), Gemini’s native image generation, Veo for video generation, robotics-focused reasoning models, native text-to-speech, and Lyria for music generation. The Cookbook also includes tutorials on advanced API workflows such as grounding answers with external tools, batch-mode request handling, and live multimodal interactivity with the Live API. Designed as a hands-on resource, it helps developers quickly explore Gemini’s capabilities while serving as a reference for integrating cutting-edge multimodal AI into applications.
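
    For example, the search-grounding pattern covered by the Cookbook looks roughly like the sketch below using the google-genai SDK; the model name is just an example, and details may shift between Cookbook revisions.

      from google import genai
      from google.genai import types

      client = genai.Client()  # reads the API key from the environment
      response = client.models.generate_content(
          model="gemini-2.5-flash",
          contents="What changed in the latest stable Python release?",
          config=types.GenerateContentConfig(
              tools=[types.Tool(google_search=types.GoogleSearch())],  # ground in live search
          ),
      )
      print(response.text)  # answer backed by retrieved search results
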
    Downloads: 2 This Week
  • 5
    Self-Operating Computer

    A framework to enable multimodal models to operate a computer

    ...Notably, it was the first known project to implement a multimodal model capable of viewing and controlling a computer screen. The framework supports features like Optical Character Recognition (OCR) and Set-of-Mark (SoM) prompting to enhance visual grounding capabilities. It is designed to be compatible with macOS, Windows, and Linux (with X server installed), and is released under the MIT license.
    Downloads: 9 This Week
  • 6
    Qwen3-VL

    Qwen3-VL, the multimodal large language model series by Alibaba Cloud

    ...Qwen3-VL is built for complex tasks such as GUI automation, multimodal coding (converting images or videos into HTML, CSS, JS, or Draw.io diagrams), long-context reasoning with support for up to 1M tokens, and comprehensive video understanding. It also brings advanced perception capabilities, including spatial grounding, object recognition, OCR across 32 languages, and robust handling of challenging inputs such as low-light or distorted text.
    Downloads: 2 This Week
  • 7
    VibeKit

    Run Claude Code, Gemini, Codex in a clean, isolated sandbox

    ...Instead of treating AI models as black boxes behind simple prompts, VibeKit encourages developers to define declarative behaviors, reactive rules, and data flows that make model outputs part of living application logic. This can include dynamic content generation, live adaptation based on user interaction, and connectors to external APIs for enriched grounding. The toolkit also supports testing and local iteration, with utilities that simulate event streams and mock model responses to make development predictable.
    Downloads: 0 This Week
  • 8
    ML Ferret

    Refer and Ground Anything Anywhere at Any Granularity

    Ferret is Apple’s end-to-end multimodal large language model designed specifically for flexible referring and grounding: it can understand references of any granularity (boxes, points, free-form regions) and then ground open-vocabulary descriptions back onto the image. The core idea is a hybrid region representation that mixes discrete coordinates with continuous visual features, so the model can fluidly handle “any-form” referring while maintaining precise spatial localization. ...
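
    The sketch below illustrates the hybrid-region idea in plain PyTorch: one region token built from discrete box coordinates plus pooled continuous visual features. It mirrors the paper's description only and is not Apple's implementation.

      import torch
      import torch.nn as nn

      class HybridRegionEmbed(nn.Module):
          """Fuse discrete coordinates with continuous region features (illustrative)."""
          def __init__(self, feat_dim=256, out_dim=512):
              super().__init__()
              self.coord_proj = nn.Linear(4, out_dim)        # discrete box coordinates
              self.feat_proj = nn.Linear(feat_dim, out_dim)  # continuous visual features

          def forward(self, box_xyxy, region_feats):
              # box_xyxy: (B, 4) normalized coords; region_feats: (B, N, feat_dim)
              coord = self.coord_proj(box_xyxy)
              visual = self.feat_proj(region_feats.mean(dim=1))  # simple pooling stand-in
              return coord + visual  # one token representing "this region"

      embed = HybridRegionEmbed()
      print(embed(torch.rand(2, 4), torch.rand(2, 16, 256)).shape)  # torch.Size([2, 512])
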
    Downloads: 0 This Week
  • 9
    Multimodal

    TorchMultimodal is a PyTorch library

    ...The design emphasizes composability: you can mix and match encoder, fusion, and decoder components rather than starting from monolithic models. The repository also includes example scripts and datasets for common multimodal tasks (e.g. retrieval, visual question answering, grounding) so you can test and compare models end to end. Installation supports both CPU and CUDA, and the codebase is versioned, tested, and maintained.
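
    The composability pattern looks roughly like the sketch below; the class is an illustrative stand-in for the encoder/fusion/head mix-and-match style, not one of the library's actual modules.

      import torch
      import torch.nn as nn

      class LateFusion(nn.Module):
          """Compose independent encoders with a fusion layer and a task head."""
          def __init__(self, image_encoder, text_encoder, dim=128, num_classes=2):
              super().__init__()
              self.image_encoder = image_encoder
              self.text_encoder = text_encoder
              self.fusion = nn.Linear(2 * dim, dim)
              self.head = nn.Linear(dim, num_classes)

          def forward(self, image, text):
              fused = torch.cat([self.image_encoder(image), self.text_encoder(text)], dim=-1)
              return self.head(torch.relu(self.fusion(fused)))

      # Swap in any encoders with matching output dims -- the point of composability.
      model = LateFusion(nn.Linear(784, 128), nn.Linear(300, 128))
      print(model(torch.rand(4, 784), torch.rand(4, 300)).shape)  # (4, 2)
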
    Downloads: 0 This Week
  • 10
    Seamless Communication

    Foundational Models for State-of-the-Art Speech and Text Translation

    ...The system architecture includes a real-time multimodal signal pipeline for audio, video, and sensor data, a dialog manager that can decide when to act (speak, gesture, point) or query, and a cross-modal reasoning layer that fuses perception with semantic context. The research prototype includes components for visual grounding (understanding when a user references something in view), gesture recognition and synthesis, and turn-taking mechanisms that mirror human conversational timing. Because latency and synchronization are critical, the codebase invests in asynchronous scheduling, overlap of perception and reasoning, and fast fallback responses.
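
    A toy sketch of that perception/reasoning overlap with asyncio appears below; the stages are simulated with sleeps, and the structure is an assumption about the pattern described, not the project's scheduler.

      import asyncio

      async def perceive(frame_id: int) -> str:
          await asyncio.sleep(0.05)           # simulated signal processing
          return f"percept-{frame_id}"

      async def reason(percept: str) -> str:
          await asyncio.sleep(0.08)           # simulated cross-modal reasoning
          return f"response to {percept}"

      async def pipeline(n_frames: int):
          pending = None
          for i in range(n_frames):
              percept = await perceive(i)     # reasoning on frame i-1 runs meanwhile
              if pending is not None:
                  print(await pending)        # collect the previous reasoning result
              pending = asyncio.create_task(reason(percept))
          print(await pending)

      asyncio.run(pipeline(3))
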
    Downloads: 0 This Week
  • 11
    Qwen-2.5-VL

    Qwen2.5-VL is the multimodal large language model series

    Qwen2.5 is a series of large language models developed by the Qwen team at Alibaba Cloud, designed to enhance natural language understanding and generation across multiple languages. The models are available in various sizes, including 0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B parameters, catering to diverse computational requirements. Trained on a comprehensive dataset of up to 18 trillion tokens, Qwen2.5 models exhibit significant improvements in instruction following, long-text generation...
    Downloads: 11 This Week
  • 12
    Agent S

    Agent S: an open agentic framework that uses computers like a human

    ...The latest version, Agent S3, surpasses human-level performance on the OSWorld benchmark, demonstrating state-of-the-art results in complex multi-step computer tasks. Agent S combines powerful foundation models (such as GPT-5) with grounding models like UI-TARS to translate visual inputs into precise executable actions. It supports flexible deployment via CLI, SDK, or cloud, and integrates with multiple model providers including OpenAI, Anthropic, Gemini, Azure, and Hugging Face endpoints. With optional local code execution, reflection mechanisms, and compositional planning, Agent S provides a scalable and research-driven framework for building advanced computer-use agents.
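
    The plan-then-ground loop the description refers to can be sketched as below; planner, grounder, and executor are hypothetical interfaces, not Agent S's actual classes.

      from dataclasses import dataclass

      @dataclass
      class Action:
          kind: str          # e.g. "click"
          x: int = 0
          y: int = 0

      def run_episode(planner, grounder, executor, screenshot, goal, max_steps=10):
          history = []
          for _ in range(max_steps):
              step = planner.next_step(goal, screenshot, history)  # natural-language step
              if step == "done":
                  break
              x, y = grounder.locate(screenshot, step)  # grounding model -> coordinates
              action = Action(kind="click", x=x, y=y)
              history.append(action)
              screenshot = executor.apply(action)       # act, then re-capture the screen
          return history
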
    Downloads: 9 This Week
  • 13
    MAI-UI

    Real-World Centric Foundation GUI Agents

    ...Developed by Tongyi-MAI (Alibaba’s research initiative), the MAI-UI models are multimodal agents trained to understand user instructions and corresponding screenshots, grounding those instructions to on-screen elements and generating sequences of GUI actions such as taps, swipes, text input, and system commands. Unlike traditional UI frameworks, MAI-UI emphasizes realistic deployment by supporting agent–user interaction (clarifying ambiguous instructions), integration with external tool APIs using MCP calls, and a device–cloud collaboration mechanism that dynamically routes computation to on-device or cloud models based on task state and privacy constraints.
    Downloads: 0 This Week
  • 14
    GLM-V

    GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning

    ...GLM-4.5V builds on the flagship GLM-4.5-Air foundation (106B parameters, 12B active), achieving state-of-the-art results on 42 benchmarks across image, video, document, GUI, and grounding tasks. It introduces hybrid training for broad-spectrum reasoning and a Thinking Mode switch to balance speed and depth of reasoning. GLM-4.1V-9B-Thinking incorporates reinforcement learning with curriculum sampling (RLCS) and Chain-of-Thought reasoning, outperforming models much larger in scale (e.g., Qwen-2.5-VL-72B) across many benchmarks.
    Downloads: 0 This Week
  • 15
    Vidi2

    Large Multimodal Models for Video Understanding and Editing

    ...It’s designed to process long-form, real-world videos and answer complex queries such as “when in this clip does X happen?” or “where in the frame is object Y during that moment?”, offering temporal retrieval, spatio-temporal grounding (locating objects across both time and space), and video question answering. Vidi targets applications such as intelligent video editing, automated video search, content analysis, and editing assistance, enabling users to efficiently locate relevant segments and objects in hours-long footage. The system is built with open-source release in mind, giving developers access to model code, inference scripts, and evaluation pipelines so they can reproduce research results or integrate Vidi into their own video-processing workflows.
    Downloads: 0 This Week
  • 16
    Qwen-VL

    Chat & pretrained large vision language model

    ...Qwen-VL supports multilingual inputs and conversation (e.g. Chinese, English), and is aimed at tasks like image captioning, question answering on images (VQA, DocVQA), grounding (detecting objects or regions from textual queries), etc.
    Downloads: 1 This Week
  • 17
    DeepSeek VL

    Towards Real-World Vision-Language Understanding

    DeepSeek-VL is DeepSeek’s initial vision-language model that anchors their multimodal stack. It enables understanding and generation across visual and textual modalities: it can process an image together with a prompt, answer questions about images, caption, classify, or reason about visuals in context. The model is likely used internally as the visual encoder backbone for agent use cases, to ground perception in downstream tasks (e.g. answering questions about a screenshot). The repository...
    Downloads: 12 This Week
  • 18
    Langroid

    Harness LLMs with Multi-Agent Programming

    Given the remarkable abilities of recent Large Language Models (LLMs), there is an unprecedented opportunity to build intelligent applications powered by this transformative technology. The top question for any enterprise is: how best to harness the power of LLMs for complex applications? For technical and practical reasons, building LLM-powered applications is not as simple as throwing a task at an LLM system and expecting it to do it. Effectively leveraging LLMs at scale requires a...
    Downloads: 12 This Week
  • 19
    Kimi K2.5

    Moonshot's most powerful AI model

    Kimi K2.5 is Moonshot AI’s open-source, native multimodal agentic model built through continual pretraining on approximately 15 trillion mixed vision and text tokens. Based on a 1T-parameter Mixture-of-Experts (MoE) architecture with 32B activated parameters, it integrates advanced language reasoning with strong visual understanding. K2.5 supports both “Thinking” and “Instant” modes, enabling either deep step-by-step reasoning or low-latency responses depending on the task. Designed for...
    Downloads: 52 This Week
  • 20
    LangExtract

    A Python library for extracting structured information

    LangExtract is a Python library developed by Google that leverages large language models (LLMs) to extract structured information from unstructured text—such as clinical notes, research papers, or literary works—based on user-defined instructions. It is designed to transform free-form text into reliable, schema-constrained data while maintaining traceability back to the source material. Each extracted entity is precisely grounded in its original context, allowing visual inspection and...
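
    A sketch of a call, adapted loosely from the README's quick-start; parameter and class names follow the README at the time of writing, so verify against the current docs.

      import langextract as lx

      examples = [
          lx.data.ExampleData(
              text="Patient reports mild headache after taking ibuprofen.",
              extractions=[
                  lx.data.Extraction(extraction_class="medication", extraction_text="ibuprofen"),
                  lx.data.Extraction(extraction_class="symptom", extraction_text="mild headache"),
              ],
          )
      ]

      result = lx.extract(
          text_or_documents="Patient given 250 mg amoxicillin; nausea resolved.",
          prompt_description="Extract medications and symptoms exactly as they appear.",
          examples=examples,
          model_id="gemini-2.5-flash",
      )
      for e in result.extractions:
          print(e.extraction_class, "->", e.extraction_text)  # grounded to source spans
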
    Downloads: 6 This Week
  • 21
    DeepSeek VL2

    Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding

    ...While the internal architecture details are not fully documented publicly, the repo suggests that VL2 introduces enhancements over prior vision-language models (e.g. better scaling, cross-modal attention, more robust alignment) to improve grounding and multimodal understanding.
    Downloads: 10 This Week
  • 22
    WeKnora

    LLM framework for document understanding and semantic retrieval

    ...It follows the Retrieval-Augmented Generation (RAG) paradigm, where relevant document segments are retrieved and used by language models to generate accurate, context-aware responses. This approach enables the system to provide more reliable answers by grounding model reasoning in the content of uploaded documents. WeKnora is designed with a modular architecture that separates components for document processing, search strategies, and model inference, allowing developers to customize or extend different parts of the pipeline. It supports knowledge base management and conversational question answering built on top of structured and unstructured documents.
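
    The retrieve-then-generate pattern it implements can be sketched generically as below; the retriever and llm objects are stand-ins, not WeKnora's actual components.

      def rag_answer(question: str, retriever, llm, k: int = 3) -> str:
          # 1) Retrieve the k document segments most relevant to the question.
          segments = retriever.search(question, top_k=k)
          context = "\n\n".join(s.text for s in segments)
          # 2) Ground generation in those segments instead of parametric memory.
          prompt = (
              "Answer using only the context below. Cite the segment you used.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}"
          )
          return llm.generate(prompt)
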
    Downloads: 5 This Week
  • 23
    Granite TSFM

    Foundation Models for Time Series

    ...It documents the currently supported Python versions and points users to where the core TSFM models are hosted and how to wire up service components. Issues and examples in the tracker illustrate common tasks such as slicing inference windows or using pipeline helpers that return pandas DataFrames, grounding the library in day-to-day time-series operations. The ecosystem around TSFM also includes a community cookbook of “recipes” that showcase capabilities and patterns. Overall, the repo is designed as a hands-on companion for teams adopting time-series foundation models in production-leaning settings.
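
    Slicing inference windows, one of the day-to-day operations mentioned, can be sketched with plain pandas; the helper below is illustrative, not one of the library's pipeline utilities.

      import pandas as pd

      def inference_windows(df: pd.DataFrame, context_length: int, stride: int):
          """Yield windows of context_length rows, advancing by stride."""
          for start in range(0, len(df) - context_length + 1, stride):
              yield df.iloc[start : start + context_length]

      ts = pd.DataFrame({"value": range(10)},
                        index=pd.date_range("2024-01-01", periods=10, freq="h"))
      for window in inference_windows(ts, context_length=4, stride=3):
          print(window.index[0], "->", window.index[-1])
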
    Downloads: 4 This Week
  • 24
    LingBot-VLA

    A Pragmatic VLA Foundation Model

    LingBot-VLA is an open-source Vision-Language-Action (VLA) foundational AI model designed to serve as a general “brain” for real-world robotic manipulation by grounding multimodal perception and language into actionable motions. It has been pretrained on tens of thousands of hours of real robotic interaction data across multiple robot platforms, which enables it to generalize well to diverse morphologies and tasks without needing extensive retraining on each new bot. The model aims to bridge vision, language understanding, and motor control within one unified architecture, making it capable of understanding high-level instructions and generating coherent low-level actions in physical environments. ...
    Downloads: 1 This Week
  • 25
    Learn Go with Tests

    Learn Go with test-driven development

    Katas are fun, but they are usually limited in their scope for learning a language; you're unlikely to use goroutines to solve a kata. Another problem is varying levels of enthusiasm: some people simply learn far more of the language than others, and when demonstrating what they have done they end up confusing people with features the others are not familiar with. This makes the learning feel quite unstructured and ad hoc. By far the most effective way was by slowly introducing...
    Downloads: 2 This Week