Abstract
Annotation ambiguity due to inherent data uncertainties, such as blurred boundaries in medical scans and differences in observer expertise and preferences, has become a major obstacle for training deep learning-based medical image segmentation models. To address it, the common practice is to gather multiple annotations from different experts, leading to the setting of multi-rater medical image segmentation. Existing works aim to either merge different annotations into a 'ground truth' that is often unattainable in numerous medical contexts, or generate diverse results, or produce personalized results corresponding to individual expert raters. Here, we bring up a more ambitious goal for multi-rater medical image segmentation, i.e., obtaining both diversified and personalized results. Specifically, we propose a two-stage framework named D-Persona (first Diversification and then Personalization). In Stage I, we exploit multiple given annotations to train a Probabilistic U-Net model, with a bound-constrained loss to improve the prediction diversity. In this way, a common latent space is constructed in Stage I, where different latent codes denote diversified expert opinions. Then, in Stage II, we design multiple attention-based projection heads to adaptively query the corresponding expert prompts from the shared latent space, and then perform personalized medical image segmentation. We evaluated the proposed model on our in-house Nasopharyngeal Carcinoma dataset and the public lung nodule dataset (i.e., LIDC-IDRI). Extensive experiments demonstrated that D-Persona can provide diversified and personalized results at the same time, achieving new SOTA performance for multi-rater medical image segmentation. Our code will be released at https://github.com/ycwu1997/D-Persona.
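
To make the two-stage idea concrete, the sketch below illustrates it under assumed, simplified components: a small probabilistic segmentation network whose sampled latent codes yield diversified predictions (Stage I), and per-rater learnable prompts that attend over the shared latent space to produce personalized predictions (Stage II). All names (e.g., `TinyProbSegNet`, `ExpertPromptHead`) are hypothetical and do not come from the released D-Persona code; the bound-constrained diversity loss and the full training loop are omitted.

```python
# Illustrative sketch only -- NOT the released D-Persona implementation.
import torch
import torch.nn as nn

class TinyProbSegNet(nn.Module):
    """Stage I (hypothetical): an encoder predicts a Gaussian latent map;
    the decoder maps (features, latent code) to a segmentation logit map."""
    def __init__(self, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.to_mu = nn.Conv2d(16, latent_dim, 1)
        self.to_logvar = nn.Conv2d(16, latent_dim, 1)
        self.dec = nn.Conv2d(16 + latent_dim, 1, 1)

    def forward(self, x, z=None):
        feats = self.enc(x)
        mu, logvar = self.to_mu(feats), self.to_logvar(feats)
        if z is None:  # sample a latent code -> a diversified prediction
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        logits = self.dec(torch.cat([feats, z], dim=1))
        return logits, mu, logvar

class ExpertPromptHead(nn.Module):
    """Stage II (hypothetical): one learnable prompt per rater attends over
    the shared latent map to produce a rater-specific latent code."""
    def __init__(self, latent_dim=8, num_raters=4):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_raters, latent_dim))
        self.attn = nn.MultiheadAttention(latent_dim, num_heads=1, batch_first=True)

    def forward(self, latent_map, rater_id):
        b, c, h, w = latent_map.shape
        tokens = latent_map.flatten(2).transpose(1, 2)   # (B, H*W, C)
        query = self.prompts[rater_id].expand(b, 1, -1)  # (B, 1, C)
        code, _ = self.attn(query, tokens, tokens)       # (B, 1, C)
        return code.transpose(1, 2).reshape(b, c, 1, 1).expand(b, c, h, w)

if __name__ == "__main__":
    x = torch.randn(2, 1, 64, 64)
    stage1 = TinyProbSegNet()
    logits, mu, logvar = stage1(x)         # diversified output (random code)
    head = ExpertPromptHead()
    z_r = head(mu, rater_id=1)             # rater-specific code from shared space
    personalized, _, _ = stage1(x, z=z_r)  # personalized output for that rater
    print(logits.shape, personalized.shape)
```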
Original language | English |
---|---|
Title of host publication | Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024 |
Publisher | IEEE |
Pages | 11470-11479 |
Number of pages | 10 |
ISBN (Electronic) | 9798350353006 |
ISBN (Print) | 9798350353013 |
DOIs | |
Publication status | Published - Jun 2024 |
Event | 2024 37th IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024 - Seattle Convention Center, Seattle, United States. Duration: 17 Jun 2024 → 21 Jun 2024 |
Publication series
Name | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition |
---|---|
Publisher | IEEE |
ISSN (Print) | 1063-6919 |
ISSN (Electronic) | 2575-7075 |
Conference
Conference | 2024 37th IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024 |
---|---|
Abbreviated title | CVPR 2024 |
Country/Territory | United States |
City | Seattle |
Period | 17/06/24 → 21/06/24 |
Internet address | https://cvpr.thecvf.com/Conferences/2024 (Conference website); https://cvpr.thecvf.com/virtual/2024 (Conference website); https://cvpr.thecvf.com/virtual/2024/calendar (Conference schedule); https://media.eventhosts.cc/Conferences/CVPR2024/CVPR_main_conf_2024.pdf (Conference program); https://openaccess.thecvf.com/CVPR2024 (Conference proceedings); https://ieeexplore.ieee.org/xpl/conhome/10654794/proceeding (Conference proceedings) |
Scopus Subject Areas
- Software
- Computer Vision and Pattern Recognition