
Relation-induced Multi-modal Shared Representation Learning for Alzheimer's Disease Diagnosis


Introduction:

Alzheimer's disease (AD), one of the most common neurodegenerative diseases in elderly people, is a genetically complex disorder characterized by irreversible loss of neurons [1]. As the disease progresses, it results in irreversible brain atrophy and leaves patients in need of around-the-clock care, placing heavy economic and psychological burdens on families and caregivers. Fortunately, early diagnosis of AD benefits patient care and helps slow progressive deterioration [2]. Thus, accurate identification of AD and its prodromal stage, i.e., mild cognitive impairment (MCI), has drawn extensive attention.

Abstract:

The fusion of multi-modal data (e.g., magnetic resonance imaging (MRI) and positron emission tomography (PET)) has become prevalent for accurate identification of Alzheimer's disease (AD), as it provides complementary structural and functional information. However, most existing methods simply concatenate multi-modal features in the original space and ignore the underlying associations among them, which may provide more discriminative characteristics for AD identification. Meanwhile, how to overcome the overfitting caused by high-dimensional multi-modal data remains challenging. To this end, we propose a relation-induced multi-modal shared representation learning method for AD diagnosis. The proposed method integrates representation learning, dimension reduction, and classifier modeling into a unified framework. Specifically, the framework first obtains multi-modal shared representations by learning a bi-directional mapping between the original space and a shared space. Within this shared space, we utilize several relational regularizers (including feature-feature, feature-label, and sample-sample regularizers) and auxiliary regularizers to encourage learning the underlying associations inherent in multi-modal data and to alleviate overfitting, respectively. Finally, we project the shared representations into the target space for AD diagnosis.
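For intuition, a minimal sketch of the bi-directional mapping idea is given below using synthetic NumPy matrices (the names X_mri, X_pet, H, W1, and W2 are hypothetical and not taken from the paper): each modality is reconstructed from a common low-dimensional representation H, so that H captures information shared across modalities.

import numpy as np

# Hypothetical inputs: n subjects, MRI and PET feature matrices of size n x d1 and n x d2
rng = np.random.default_rng(0)
n, d1, d2, k = 100, 90, 90, 20
X_mri = rng.standard_normal((n, d1))
X_pet = rng.standard_normal((n, d2))

# Shared representation H (n x k) and modality-specific mappings W1, W2 (shared space -> original space)
H = rng.standard_normal((n, k))
W1 = rng.standard_normal((k, d1))
W2 = rng.standard_normal((k, d2))

def reconstruction_loss(H, W1, W2):
    # Bi-directional mapping term: both modalities should be recoverable from the shared space
    return (np.linalg.norm(X_mri - H @ W1) ** 2
            + np.linalg.norm(X_pet - H @ W2) ** 2)

In practice H, W1, and W2 would be optimized jointly with the regularizers and classifier described below rather than sampled at random.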

Existing work:

Prior studies proposed a multi-kernel learning (MKL) based model to fuse multi-modal features by simultaneously learning kernel weights and a maximum-margin classifier. Other work learned a latent space that preserves the modality-specific information of multi-modal data and then projected the latent features into the label space for prediction. A further line of work utilized canonical correlation analysis (CCA) to combine multi-modal information by mapping the original multi-modal data to a common space, and then constructed support vector models for joint regression and classification of AD.
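As an illustration of the CCA-based fusion mentioned above, the following sketch (with synthetic stand-in data; this is not the original pipeline or dataset) maps two modalities into a common space with scikit-learn's CCA and trains a linear support vector classifier on the fused canonical variates:

import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

# Synthetic stand-ins for MRI and PET feature matrices (100 subjects each)
rng = np.random.default_rng(0)
X_mri = rng.standard_normal((100, 90))
X_pet = rng.standard_normal((100, 90))
y = rng.integers(0, 2, size=100)               # hypothetical AD vs. normal-control labels

cca = CCA(n_components=10)
Z_mri, Z_pet = cca.fit_transform(X_mri, X_pet)  # project both modalities into a common space
Z = np.hstack([Z_mri, Z_pet])                   # fused representation

clf = SVC(kernel="linear").fit(Z, y)            # support vector model on the fused space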

Disadvantages:

Although these methods are promising, how to explore the underlying associations inherent in multi-modal data and generate discriminative representations for AD diagnosis remains challenging.

Proposed work:

We propose a relation-induced multi-modal shared representation learning method for AD diagnosis. The proposed method integrates representation learning, dimension reduction, and classifier modeling into a unified framework. Specifically, the framework first obtains multi-modal shared representations by learning a bi-directional mapping between the original space and a shared space. Within this shared space, we utilize several relational regularizers (including feature-feature, feature-label, and sample-sample regularizers) and auxiliary regularizers to encourage learning the underlying associations inherent in multi-modal data and to alleviate overfitting, respectively. Next, we project the shared representations into the target space for AD diagnosis.
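The snippet below sketches, under loose assumptions, how the sample-sample and feature-label regularizers could be written on top of the reconstruction term (S is a hypothetical subject-similarity matrix, Y a one-hot label matrix, and Q a projection into the target space; the exact formulation is the one given in the paper, not this code):

import numpy as np

def sample_sample_reg(H, S):
    # Graph-Laplacian smoothness: subjects that are similar (large S[i, j])
    # are encouraged to have similar shared representations (rows of H)
    L = np.diag(S.sum(axis=1)) - S
    return np.trace(H.T @ L @ H)

def feature_label_reg(H, Q, Y):
    # Shared representations projected into the target (label) space should match the labels
    return np.linalg.norm(Y - H @ Q) ** 2

def objective(H, W1, W2, Q, X_mri, X_pet, Y, S, lam1=1.0, lam2=1.0):
    recon = (np.linalg.norm(X_mri - H @ W1) ** 2     # bi-directional mapping term
             + np.linalg.norm(X_pet - H @ W2) ** 2)
    return recon + lam1 * sample_sample_reg(H, S) + lam2 * feature_label_reg(H, Q, Y)

Because the classification term is part of the same objective, the shared representation is learned in a task-driven manner rather than fixed before the classifier is trained.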

Advantage:

The proposed method learns latent discriminative representations in a task-driven manner by integrating representation learning and classifier modeling into a unified framework.

System requirements:

  Software requirements:

  • Operating system     :   Windows.
  • Coding Language      :   Python.

Hardware components:

  • System               :   Pentium IV 2.4 GHz or Intel.
  • Hard Disk            :   40 GB.
  • Floppy Drive         :   1.44 MB.
  • Mouse                :   Optical Mouse.
  • RAM                  :   512 MB.

Conclusion:

In this paper, we proposed a relation-induced multi-modal shared representation learning framework for AD diagnosis. The proposed method integrates representation learning, dimension reduction, and classifier modeling into a unified framework. Within the learned shared space, several relational regularizers (including feature-feature, feature-label, and sample-sample regularizers) and auxiliary regularizers encourage learning the potential associations inherent in multi-modal data and alleviate overfitting, respectively. The shared representations are then projected into the target space for AD diagnosis. Experimental results demonstrate that the proposed method not only outperforms several state-of-the-art methods, but also identifies potential biomarkers for AD diagnosis. In future work, we will investigate the feasibility of applying the proposed method to the diagnosis of other brain diseases.

