Towards Open-Vocabulary Semantic Segmentation Without Semantic Labels
Format: Journal Article
Language: English
Published: 29-09-2024
Summary: Large-scale vision-language models like CLIP have demonstrated impressive open-vocabulary capabilities for image-level tasks, excelling in recognizing what objects are present. However, they struggle with pixel-level recognition tasks like semantic segmentation, which additionally require understanding where the objects are located. In this work, we propose a novel method, PixelCLIP, to adapt the CLIP image encoder for pixel-level understanding by guiding the model on where, which is achieved using unlabeled images and masks generated from vision foundation models such as SAM and DINO. To address the challenges of leveraging masks without semantic labels, we devise an online clustering algorithm using learnable class names to acquire general semantic concepts. PixelCLIP shows significant performance improvements over CLIP and competitive results compared to caption-supervised methods in open-vocabulary semantic segmentation. Project page is available at https://cvlab-kaist.github.io/PixelCLIP
DOI: 10.48550/arxiv.2409.19846
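
The summary describes two technical ingredients: pooling CLIP image features inside class-agnostic masks (produced by models such as SAM or DINO) and grouping those mask embeddings into general semantic concepts via online clustering over learnable prototypes standing in for the learnable class names. The following is a minimal sketch of that idea, not the authors' implementation; the prototype count, temperature, helper names, and the simple soft-assignment clustering are illustrative assumptions.

```python
# Minimal sketch (not the PixelCLIP code): mask-pooled features assigned to
# learnable semantic prototypes via soft clustering. Sizes and the temperature
# value are assumptions for illustration only.
import torch
import torch.nn.functional as F

num_prototypes, feat_dim = 32, 512  # hypothetical number of concepts / feature width
prototypes = torch.nn.Parameter(torch.randn(num_prototypes, feat_dim))

def mask_pooled_features(feature_map, masks):
    """Average dense image features inside each class-agnostic mask.

    feature_map: (C, H, W) dense features from an image encoder (e.g. CLIP)
    masks:       (M, H, W) binary masks from a foundation model (e.g. SAM, DINO)
    returns:     (M, C) one embedding per mask
    """
    C, H, W = feature_map.shape
    flat = feature_map.reshape(C, H * W)                      # (C, HW)
    m = masks.reshape(masks.shape[0], H * W).float()          # (M, HW)
    pooled = m @ flat.t()                                     # (M, C) summed features
    return pooled / m.sum(dim=1, keepdim=True).clamp(min=1)   # per-mask means

def cluster_assignment(mask_feats, temperature=0.07):
    """Soft assignment of mask embeddings to the learnable prototypes."""
    f = F.normalize(mask_feats, dim=-1)
    p = F.normalize(prototypes, dim=-1)
    return F.softmax(f @ p.t() / temperature, dim=-1)         # (M, num_prototypes)

# Usage with random tensors standing in for real encoder/mask outputs:
feats = torch.randn(512, 32, 32)
masks = torch.rand(5, 32, 32) > 0.5
assign = cluster_assignment(mask_pooled_features(feats, masks))
print(assign.shape)  # torch.Size([5, 32])
```

In the actual method the cluster centers correspond to learnable class names rather than free-form vectors; see the project page linked above for the full implementation.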