Sa2VA
Official Repo For "Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos"
...It merges the segmentation capabilities of SAM-2, a state-of-the-art video segmentation model, with the vision-language reasoning of a strong MLLM backbone (derived from the InternVL2.5 / Qwen-VL series), yielding a system that can answer questions about visual content, perform referring segmentation, and maintain temporal consistency across video frames. With one-shot instruction tuning, Sa2VA handles prompts such as "segment the main subject", "what are the objects in this scene?", or "track this object through the video", producing pixel-level masks or textual answers as appropriate.
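For a concrete sense of the interface, below is a minimal single-image inference sketch using the Hugging Face checkpoints. It assumes the `ByteDance/Sa2VA-4B` checkpoint name and the `predict_forward` helper with the argument and return-key names shown on the model card; treat these as assumptions and defer to the actual model card if they differ.

```python
# Minimal single-image inference sketch (assumes the ByteDance/Sa2VA-4B
# checkpoint and its predict_forward helper; names may differ on the model card).
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

path = "ByteDance/Sa2VA-4B"  # assumed checkpoint name
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,  # Sa2VA ships custom modeling code
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)

image = Image.open("demo.jpg").convert("RGB")  # hypothetical input image

# Referring segmentation: the text answer comes back with any predicted masks.
result = model.predict_forward(
    image=image,
    text="<image>Please segment the main subject.",
    tokenizer=tokenizer,
)
print(result["prediction"])               # textual answer
masks = result.get("prediction_masks")    # per-object binary masks, if any
```

The repo and model cards include analogous video examples, where a sequence of frames replaces the single image and the returned masks are tracked across frames.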