Segment Anything
Provides code for running inference with the Segment Anything Model (SAM).
Segment Anything (SAM) is a foundation model for image segmentation that’s designed to work “out of the box” on a wide variety of images without task-specific fine-tuning. It’s a promptable segmenter: you guide it with points, boxes, or rough masks, and it predicts high-quality object masks consistent with the prompt. The architecture separates a powerful image encoder from a lightweight mask decoder, so the expensive image embedding can be computed once per image and each new prompt only runs the fast decoder, keeping interactive use responsive. A...
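Below is a minimal sketch of point-prompted inference, assuming Meta AI's `segment_anything` package is installed and a ViT-H checkpoint has been downloaded locally. The checkpoint path, image path, and click coordinates are placeholders for illustration, not files shipped with this repo.

```python
# Minimal point-prompt inference sketch (assumes the `segment_anything`
# package from Meta AI and a locally downloaded ViT-H checkpoint).
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

CHECKPOINT = "sam_vit_h_4b8939.pth"   # assumed local checkpoint file
IMAGE_PATH = "example.jpg"            # assumed input image

# Build the model and wrap it in a predictor.
sam = sam_model_registry["vit_h"](checkpoint=CHECKPOINT)
sam.to("cuda")                        # or "cpu" if no GPU is available
predictor = SamPredictor(sam)

# Heavy step: encode the image once; every prompt reuses this embedding.
image = cv2.cvtColor(cv2.imread(IMAGE_PATH), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Light step: prompt with a single foreground point (label 1 = foreground).
point = np.array([[500, 375]])        # (x, y) pixel coordinates, illustrative
label = np.array([1])
masks, scores, logits = predictor.predict(
    point_coords=point,
    point_labels=label,
    multimask_output=True,            # return several candidate masks
)

# Keep the highest-scoring candidate mask.
best_mask = masks[np.argmax(scores)]
```

Note that `set_image` runs the image encoder (the slow part), while each call to `predict` only runs the prompt encoder and mask decoder, which is why repeated prompting on the same image stays fast.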