Test-Time Adaptation with SaLIP: A Cascade of SAM and CLIP for Zero-Shot Medical Image Segmentation
Format: Journal Article
Language: English
Published: 09-04-2024
Online Access: Get full text
Summary: The Segment Anything Model (SAM) and CLIP are remarkable vision foundation models (VFMs). SAM, a prompt-driven segmentation model, excels at segmentation tasks across diverse domains, while CLIP is renowned for its zero-shot recognition capabilities. However, their unified potential has not yet been explored for medical image segmentation. To adapt SAM to medical imaging, existing methods rely primarily on tuning strategies that require extensive data, or on prior prompts tailored to the specific task, which makes adaptation particularly challenging when only a limited number of data samples are available. This work presents an in-depth exploration of integrating SAM and CLIP into a unified framework for medical image segmentation. Specifically, we propose SaLIP, a simple unified framework for organ segmentation. First, SAM is used for part-based segmentation within the image; CLIP then retrieves the mask corresponding to the region of interest (ROI) from the pool of SAM-generated masks; finally, SAM is prompted with the retrieved ROI to segment the specific organ. SaLIP is thus training- and fine-tuning-free and relies on neither domain expertise nor labeled data for prompt engineering. Our method yields substantial gains in zero-shot segmentation, improving Dice scores over unprompted SAM across diverse tasks: brain (63.46%), lung (50.11%), and fetal head (30.82%). Code and text prompts are available at: https://github.com/aleemsidra/SaLIP.
DOI: 10.48550/arxiv.2404.06362
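
The summary describes a three-stage cascade: SAM's automatic mask generator proposes part masks without prompts, CLIP ranks the mask crops against a text description of the target organ to retrieve the ROI, and SAM is re-prompted with that ROI's box. Below is a minimal sketch of this flow, assuming the official `segment_anything` and OpenAI `clip` packages; the checkpoint path, input file, text prompts, and crop-based CLIP scoring are illustrative assumptions, not the authors' exact implementation (see their repository for the real code).

```python
# Minimal sketch of the SaLIP cascade. Checkpoint path, input image, and
# text prompts are illustrative assumptions.
import numpy as np
import torch
import clip
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator, SamPredictor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stage 1: unprompted, part-based segmentation of the whole image with SAM.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to(device)
image = np.array(Image.open("mri_slice.png").convert("RGB"))  # hypothetical input
masks = SamAutomaticMaskGenerator(sam).generate(image)  # dicts with "segmentation", "bbox", ...

# Stage 2: CLIP retrieves the mask whose crop best matches the ROI prompt.
clip_model, preprocess = clip.load("ViT-B/32", device=device)
text = clip.tokenize(["a photo of a brain", "a photo of background"]).to(device)
best_idx, best_prob = 0, -1.0
for i, m in enumerate(masks):
    x, y, w, h = (int(v) for v in m["bbox"])  # SAM boxes are XYWH
    crop = preprocess(Image.fromarray(image[y:y + h, x:x + w])).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = clip_model(crop, text)
        prob = logits_per_image.softmax(dim=-1)[0, 0].item()  # P(ROI | crop)
    if prob > best_prob:
        best_idx, best_prob = i, prob

# Stage 3: prompt SAM with the retrieved ROI box to segment the organ.
x, y, w, h = (int(v) for v in masks[best_idx]["bbox"])
predictor = SamPredictor(sam)
predictor.set_image(image)
organ_mask, _, _ = predictor.predict(
    box=np.array([x, y, x + w, y + h]),  # XYXY box prompt
    multimask_output=False,
)
```

Scoring mask crops with CLIP is what keeps the pipeline training- and fine-tuning-free: the only task-specific input is the text description of the target organ, so no labeled data or hand-placed prompts are needed at test time.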