(ICCV 2025) OmniSAM: Omnidirectional Segment Anything Model for UDA in Panoramic Semantic Segmentation
OmniSAM is a novel framework that adapts the Segment Anything Model 2 (SAM2) for panoramic semantic segmentation by addressing the field-of-view gap between pinhole and 360° images. It divides panoramas into patches, leverages SAM2's memory mechanism for cross-patch correspondences, and introduces an FoV-based prototypical adaptation module to enhance feature alignment and model generalization. This approach outperforms state-of-the-art methods in unsupervised domain adaptation for panoramic semantic segmentation tasks.
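To make the patch-division step concrete, here is a minimal sketch of slicing an equirectangular panorama into fixed-FoV patches with horizontal wrap-around. The function name, patch count, and FoV value are illustrative assumptions, not the authors' exact cropping scheme:

```python
import numpy as np

def split_panorama(pano, num_patches=8, fov_deg=90):
    """Split an equirectangular panorama (H x W x C) into overlapping
    fixed-FoV patches. Hypothetical helper, not OmniSAM's exact code."""
    H, W, C = pano.shape
    patch_w = int(W * fov_deg / 360)   # patch width in pixels for the given FoV
    stride = W // num_patches          # horizontal step between patch origins
    patches = []
    for i in range(num_patches):
        start = i * stride
        cols = np.arange(start, start + patch_w) % W  # wrap around the 360° seam
        patches.append(pano[:, cols, :])
    return patches
```

With the defaults above, a 360°-wide panorama yields eight 90°-wide patches whose overlap lets a memory mechanism carry correspondences across neighboring views.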