Presentation Information

[3Yin-A-07]Multi-label Supervised Contrastive Learning for Medical Image Classification

〇Eichi Takaya1, Ryusei Inamori1 (1. Tohoku University Graduate School of Medicine)

Keywords:

Medical image, Representation learning

Large-scale supervised pretraining in medical imaging is constrained by the cost of expert disease annotations, whereas routinely available metadata—such as imaging modality and anatomical region—remains underutilized. We propose a multi-label supervised contrastive pretraining framework that encodes modality and anatomical region as multi-hot targets and optimizes a Jaccard-weighted multi-label contrastive objective. A ResNet-18 encoder is pretrained on a subset of RadImageNet. We evaluate transferability by fine-tuning the pretrained encoder on three binary downstream tasks: ACL injury classification in knee MRI, lesion malignancy classification in breast ultrasound, and nodule malignancy classification in thyroid ultrasound. The proposed method achieved the best AUC on the ACL (0.964) and thyroid (0.763) tasks, and was competitive on the breast task (0.926), ranking second to SimCLR (0.940). These results suggest that readily available modality and anatomy metadata can serve as supervision, providing an effective and label-efficient initialization for downstream medical image classification when fine-tuning is performed.
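To make the objective concrete, the following is a minimal NumPy sketch of one plausible form of a Jaccard-weighted multi-label contrastive loss: each pair of samples is weighted by the Jaccard similarity of its multi-hot metadata vectors inside a supervised-contrastive (InfoNCE-style) loss. The function name, weighting scheme, and normalization details are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def jaccard_weighted_contrastive_loss(embeddings, labels, temperature=0.1):
    """Hypothetical sketch of a Jaccard-weighted multi-label supervised
    contrastive loss (exact formulation in the paper may differ).

    embeddings: (N, D) L2-normalized feature vectors
    labels:     (N, L) multi-hot targets (e.g., modality + anatomy bits)
    """
    n = embeddings.shape[0]
    labels = labels.astype(float)

    # Pairwise Jaccard similarity between multi-hot label vectors:
    # |A ∩ B| / |A ∪ B|, computed from inner products and row sums.
    inter = labels @ labels.T
    union = labels.sum(1, keepdims=True) + labels.sum(1) - inter
    jac = inter / np.maximum(union, 1e-8)

    # Temperature-scaled cosine similarities (embeddings are L2-normalized).
    logits = embeddings @ embeddings.T / temperature
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability

    # Exclude self-pairs from both the weights and the softmax denominator.
    mask = ~np.eye(n, dtype=bool)
    denom = np.where(mask, np.exp(logits), 0.0).sum(1, keepdims=True)
    log_prob = logits - np.log(denom)

    # Each pair contributes in proportion to its label overlap (Jaccard weight).
    weights = jac * mask
    per_anchor = -(weights * log_prob).sum(1) / np.maximum(weights.sum(1), 1e-8)
    return per_anchor.mean()
```

With binary (one-hot) labels the Jaccard weight reduces to 0/1 and the loss recovers the standard supervised contrastive objective, which is one way such a multi-label extension can be motivated.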