ZAP-2.5DSAM: Zero Additional Parameters Advancing 2.5D SAM Adaptation to 3D Tumor Segmentation


Abstract

The Segment Anything Model (SAM) has demonstrated outstanding performance on 2D segmentation tasks, generalizing robustly to natural images through its prompt-driven design. However, because it does not model volumetric spatial information, and because of the domain gap between natural and medical images, its direct application to 3D medical image segmentation is suboptimal. Existing approaches to adapting SAM for 3D segmentation typically make architectural adjustments by integrating additional components, which increases the number of trainable parameters and the GPU memory required for fine-tuning. Moreover, retraining the prompt encoder can degrade spatial localization, especially when annotated data are scarce. To address these limitations, we propose ZAP-2.5DSAM, a parameter-efficient fine-tuning framework that extends SAM's segmentation capability to 3D medical images through a 2.5D decomposition scheme without introducing any additional adapter modules. Our method fine-tunes only 3.51M parameters of the original SAM, significantly reducing GPU memory requirements during training. Extensive experiments on multiple 3D tumor segmentation benchmarks demonstrate that ZAP-2.5DSAM achieves superior segmentation accuracy compared with conventional fine-tuning methods. Our code and models are available at: https://github.com/CaiGuoHS/ZAP-2.5DSAM.git.
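The abstract describes the 2.5D decomposition only at a high level; the PyTorch sketch below is a rough, hypothetical illustration of the general 2.5D idea, not the paper's implementation. It groups each slice of a 3D volume with its depth neighbors into a three-channel stack that a 2D image encoder such as SAM's can consume directly. The function name decompose_2p5d and the context window size are our assumptions; consult the linked repository for the actual method.

import torch

def decompose_2p5d(volume: torch.Tensor, context: int = 1) -> torch.Tensor:
    # Illustrative 2.5D decomposition (assumed, not the paper's exact scheme):
    # turn a 3D volume of shape (D, H, W) into D multi-slice stacks of shape
    # (2*context + 1, H, W), each pairing a target slice with its neighbors
    # along the depth axis.
    depth = volume.shape[0]
    # Replicate-pad along depth so boundary slices still get full context.
    padded = torch.cat(
        [volume[:1].repeat(context, 1, 1), volume, volume[-1:].repeat(context, 1, 1)],
        dim=0,
    )
    # Stack i covers padded slices [i, i + 2*context], i.e. the target slice
    # plus `context` neighbors on each side.
    stacks = torch.stack(
        [padded[i : i + 2 * context + 1] for i in range(depth)], dim=0
    )
    return stacks  # (D, 2*context + 1, H, W); context=1 gives 3 channels

# Usage: a 64-slice volume becomes 64 pseudo-RGB inputs for a 2D encoder;
# per-slice predictions would then be restacked into a 3D mask.
ct = torch.randn(64, 512, 512)
inputs = decompose_2p5d(ct)
print(inputs.shape)  # torch.Size([64, 3, 512, 512])

With context=1 the three-channel stacks match the RGB input expected by SAM's pretrained 2D encoder, which is one way a 2.5D scheme can inject local volumetric context without any new adapter parameters.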
