Towards Granularity-adjusted Pixel-level Semantic Annotation
Main Authors:
Format: Journal Article
Language: English
Published: 04-12-2023
Subjects:

Summary: Recent advancements in computer vision predominantly rely on learning-based systems, leveraging annotations as the driving force to develop specialized models. However, annotating pixel-level information, particularly for semantic segmentation, is a challenging and labor-intensive task, prompting the need for autonomous processes. In this work, we propose GranSAM, which provides semantic segmentation at a user-defined granularity level on unlabeled data without any manual supervision, a distinct contribution among semantic mask annotation methods. Specifically, we equip the Segment Anything Model (SAM) with semantic recognition capability so that it can generate pixel-level annotations for images without manual supervision. For this, we accumulate semantic information from synthetic images generated by the Stable Diffusion model, or from web-crawled images, and use this data to learn a mapping function between SAM mask embeddings and object class labels. As a result, SAM, equipped with granularity-adjusted mask recognition, can be used for pixel-level semantic annotation. We conducted experiments on the PASCAL VOC 2012 and COCO-80 datasets and observed increases of +17.95% and +5.17% in mIoU, respectively, compared to existing state-of-the-art methods when evaluated under our problem setting.

DOI: 10.48550/arxiv.2312.02420
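
To make the summary's pipeline concrete, here is a minimal sketch of the three steps it outlines: collect class-labeled images (e.g., Stable Diffusion outputs or web-crawled photos), embed each SAM-proposed mask, learn a mapping from mask embeddings to class labels, and then annotate unlabeled images mask by mask. Only the `segment_anything` calls are the library's public API; everything else is an assumption — the checkpoint filename, the feature-pooling embedding, the largest-mask pseudo-labeling heuristic, and the logistic-regression head are stand-ins, since the abstract does not specify how GranSAM implements its mapping function.

```python
# Sketch of the abstract's pipeline, NOT the authors' implementation.
import numpy as np
import torch
import torch.nn.functional as F
from segment_anything import (sam_model_registry, SamAutomaticMaskGenerator,
                              SamPredictor)
from sklearn.linear_model import LogisticRegression

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # assumed local file
# The mask generator's proposal parameters (points_per_side, pred_iou_thresh,
# ...) are one natural place to adjust mask granularity in this sketch.
mask_generator = SamAutomaticMaskGenerator(sam)
predictor = SamPredictor(sam)

def mask_embeddings(image):
    """One vector per SAM mask: the image-encoder features (256 x 64 x 64)
    average-pooled under each mask. An assumed stand-in for the paper's
    'SAM mask embeddings'."""
    predictor.set_image(image)                      # runs SAM's image encoder
    feats = predictor.get_image_embedding()[0]      # (256, 64, 64)
    vecs, kept = [], []
    for m in mask_generator.generate(image):        # image: HxWx3 uint8 RGB
        seg = torch.from_numpy(m["segmentation"]).float()[None, None]
        # Downsample the binary mask to the feature grid, then pool inside it.
        grid = F.interpolate(seg, size=feats.shape[-2:], mode="nearest")
        grid = grid[0, 0].bool().to(feats.device)
        if grid.any():
            vecs.append(feats[:, grid].mean(dim=1).cpu().numpy())
            kept.append(m)
    return (np.stack(vecs) if vecs else np.empty((0, feats.shape[0]))), kept

def train_mapping(images_by_class):
    """images_by_class: {class name: [HxWx3 uint8 RGB arrays]}, e.g. Stable
    Diffusion outputs for 'a photo of a <class>' or web-crawled images.
    Each image is assumed to show one dominant object, so its largest mask
    is pseudo-labeled with the class (a heuristic, not from the paper)."""
    X, y = [], []
    for label, images in images_by_class.items():
        for img in images:
            embs, masks = mask_embeddings(img)
            if not masks:
                continue
            i = max(range(len(masks)), key=lambda k: masks[k]["area"])
            X.append(embs[i])
            y.append(label)
    # Simple stand-in for the learned embedding-to-label mapping function.
    return LogisticRegression(max_iter=1000).fit(np.stack(X), np.array(y))

def annotate(image, clf):
    """Pixel-level annotation of an unlabeled image: classify every proposed
    mask, painting larger masks first so finer masks overwrite them."""
    embs, masks = mask_embeddings(image)
    out = np.full(image.shape[:2], None, dtype=object)
    if not masks:
        return out
    labels = clf.predict(embs)
    for m, lab in sorted(zip(masks, labels), key=lambda p: -p[0]["area"]):
        out[m["segmentation"]] = lab
    return out  # per-pixel class names (None where no mask was proposed)
```

A hypothetical invocation would be `mapping = train_mapping({"dog": dog_images, "cat": cat_images})` followed by `labels = annotate(unlabeled_image, mapping)`. The user-defined granularity the abstract emphasizes would correspond, in this sketch, to the mask generator's proposal settings; how GranSAM actually exposes that control is described in the paper, not here.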