

Oral in Workshop: Machine Learning for Multimodal Healthcare Data

RobustSsF: Robust Missing Modality Brain Tumor Segmentation with Self-supervised Learning-based Scenario-specific Fusion

Jeongwon Lee · Daeshik Kim

Keywords: [ Multimodal fusion ] [ Medical Imaging ]


Abstract:

All modalities of Magnetic Resonance Imaging (MRI) play an essential role in diagnosing brain tumors, but missing or incomplete modalities in multimodal MRI pose serious challenges, and existing models have failed to achieve robust performance across all missing-modality scenarios. To address this issue, this paper proposes a novel four-encoder, four-decoder architecture that incorporates both "dedicated" and "single" models. Our model, named SsFnL, includes multiple Scenario-specific Fusion (SsF) decoders that construct different features depending on the missing-modality scenario. To train it, we introduce a novel self-supervised learning scheme with a Couple Regularization loss function (CReg) for robust learning, and a Lifelong Learning Strategy (LLS) to further enhance model performance. Experimental results on BraTS2018 demonstrate that SsFnL is the most robust model, achieving state-of-the-art results on the tumor core (TC) and enhancing tumor (ET) sub-regions when T1ce is missing, as well as in other challenging scenarios and sub-regions.
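To make the scenario-specific fusion idea concrete, here is a minimal sketch of a per-modality-encoder, per-scenario-decoder setup under assumptions: the layer shapes, the mean-over-available-features fusion rule, and all module names below are illustrative guesses, not the authors' implementation, and it covers only single-missing-modality scenarios for brevity.

```python
# Minimal sketch of scenario-specific fusion (NOT the authors' code).
# Assumptions: one small 3D conv encoder per MRI modality, one decoder
# per single-missing-modality scenario, and mean fusion of the features
# from the modalities that are actually available.
import torch
import torch.nn as nn

MODALITIES = ["t1", "t1ce", "t2", "flair"]

class ConvEncoder(nn.Module):
    """Hypothetical per-modality encoder: two 3D conv layers."""
    def __init__(self, channels: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class ScenarioDecoder(nn.Module):
    """Hypothetical decoder dedicated to one missing-modality scenario."""
    def __init__(self, channels: int = 8, num_classes: int = 4):
        super().__init__()
        self.head = nn.Conv3d(channels, num_classes, kernel_size=1)

    def forward(self, fused):
        return self.head(fused)

class SsFSketch(nn.Module):
    """Four encoders; the decoder is selected by which modality is missing."""
    def __init__(self):
        super().__init__()
        self.encoders = nn.ModuleDict({m: ConvEncoder() for m in MODALITIES})
        self.decoders = nn.ModuleDict({m: ScenarioDecoder() for m in MODALITIES})

    def forward(self, images: dict, missing: str):
        # Encode only the modalities present in this scenario.
        feats = [self.encoders[m](images[m]) for m in MODALITIES if m != missing]
        # Assumed fusion rule: average the available modality features.
        fused = torch.stack(feats).mean(dim=0)
        # Route to the decoder dedicated to this missing-modality scenario.
        return self.decoders[missing](fused)

model = SsFSketch()
vols = {m: torch.randn(1, 1, 16, 16, 16) for m in MODALITIES}
out = model(vols, missing="t1ce")  # scenario: T1ce is missing
print(out.shape)  # torch.Size([1, 4, 16, 16, 16])
```

Keeping one decoder per scenario, as sketched here, is one way to let each head specialize its fused features for a particular missing-modality pattern; the self-supervised CReg loss and the Lifelong Learning Strategy described in the abstract would sit on top of such a backbone.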
