Authors: Samrajya Thapa, Koushik Howlader, Subhankar Bhattacharjee, Wei Le
Abstract: In this paper, we introduce MoRE, a novel Multi-Modal Contrastive Pre-training Framework that synergistically combines X-rays, electrocardiograms (ECGs), and
radiology/cardiology reports. Our approach leverages transformers to encode
these diverse modalities into a unified representation space, aiming to enhance
diagnostic accuracy and facilitate comprehensive patient assessments. We
utilize LoRA-PEFT to significantly reduce the trainable parameters of the LLM and incorporate a recent linear attention-dropping strategy in the Vision Transformer (ViT) for smoother attention. Furthermore, we provide novel
multimodal attention explanations and retrieval for our model. To the best of
our knowledge, we are the first to propose an integrated model that combines
X-ray, ECG, and radiology/cardiology reports in this manner. By utilizing a contrastive loss, MoRE effectively aligns modality-specific features into a
coherent embedding, which supports various downstream tasks such as zero-shot
classification and multimodal retrieval. Employing our proposed methodology, we achieve state-of-the-art (SOTA) performance on the MIMIC-IV, CheXpert, Edema Severity, and PTB-XL downstream datasets, surpassing existing multimodal approaches. Our proposed framework demonstrates significant improvements in capturing intricate inter-modal relationships and robustness in medical diagnosis, establishing a foundation for future research on multimodal learning in the healthcare sector.
Source: http://arxiv.org/abs/2410.16239v1
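The abstract describes aligning the three modalities with a contrastive loss but gives no implementation details here. The sketch below assumes a CLIP-style pairwise InfoNCE objective applied to each modality pair (X-ray/report, ECG/report, X-ray/ECG); the function names, temperature value, and pairing scheme are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch of a pairwise multi-modal contrastive objective in the
# spirit of the abstract. Assumes symmetric InfoNCE per modality pair;
# the paper's exact loss may differ.
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between two batches of embeddings, each (N, D)."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature  # (N, N) cosine-similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # Matched pairs sit on the diagonal; all other entries act as negatives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def multimodal_contrastive_loss(xray_emb, ecg_emb, text_emb):
    """Pull X-ray, ECG, and report embeddings into one shared space."""
    return (info_nce(xray_emb, text_emb) +
            info_nce(ecg_emb, text_emb) +
            info_nce(xray_emb, ecg_emb)) / 3.0

# Usage with dummy encoder outputs: ViT (X-ray), ECG encoder, LLM (report).
xray = torch.randn(8, 256)
ecg = torch.randn(8, 256)
text = torch.randn(8, 256)
loss = multimodal_contrastive_loss(xray, ecg, text)
```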
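Similarly, "LoRA-PEFT" plausibly refers to low-rank adaptation applied through the Hugging Face peft library to freeze most LLM weights. A minimal sketch follows; the base model, rank, alpha, and target modules are placeholder assumptions rather than the paper's actual setup.

```python
# Hedged sketch: attaching LoRA adapters to a text encoder via `peft`,
# so only the low-rank updates are trained. Configuration values are
# illustrative, not taken from the paper.
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

base = AutoModel.from_pretrained("bert-base-uncased")  # placeholder report encoder
config = LoraConfig(
    r=8,                      # low-rank update dimension
    lora_alpha=16,            # scaling factor for the LoRA update
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections to adapt
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the LoRA adapters remain trainable
```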