Review

Application of Deep Learning Models for Automated Identification of Parkinson’s Disease: A Review (2011–2021)

1 School of Science and Technology, Singapore University of Social Sciences, Singapore 599494, Singapore
2 Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia
3 Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
4 School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Toowoomba, QLD 4350, Australia
5 School of Sciences, University of Southern Queensland, Springfield, QLD 4300, Australia
6 Centre of Clinical Genetics, Sydney Children’s Hospitals Network, Randwick, NSW 2031, Australia
7 School of Women’s and Children’s Health, University of New South Wales, Randwick, NSW 2031, Australia
8 School of Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
9 Department of Bioinformatics and Medical Engineering, Asia University, Taichung 413, Taiwan
10 Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto 860-8555, Japan
* Author to whom correspondence should be addressed.
Sensors 2021, 21(21), 7034; https://doi.org/10.3390/s21217034
Submission received: 20 September 2021 / Revised: 7 October 2021 / Accepted: 19 October 2021 / Published: 23 October 2021

Abstract

Parkinson’s disease (PD) is the second most common neurodegenerative disorder, affecting over 6 million people globally. Although symptomatic treatments can improve survival, there are no curative treatments. The prevalence of PD and the associated disability-adjusted life years continue to increase steadily, leading to a growing burden on patients, their families, society and the economy. Dopaminergic medications can significantly slow down the progression of PD when applied during the early stages. However, these treatments often become less effective as the disease progresses. Early diagnosis of PD is crucial for immediate interventions so that patients can remain self-sufficient for the longest period of time possible. Unfortunately, diagnoses are often late, due to factors such as a global shortage of neurologists skilled in early PD diagnosis. Computer-aided diagnostic (CAD) tools, based on artificial intelligence methods, that can perform automated diagnosis of PD are gaining attention from healthcare services. In this review, we have identified 63 studies published between January 2011 and July 2021 that proposed deep learning models for the automated diagnosis of PD using various modalities: brain analysis (SPECT, PET, MRI and EEG) and motor symptoms (gait, handwriting, speech and EMG). From these studies, we identify the best performing deep learning model reported for each modality and highlight the current limitations that are hindering the adoption of such CAD tools in healthcare. Finally, we propose new directions to further the studies on deep learning for the automated detection of PD, in the hope of improving the utility, applicability and impact of such tools and thereby improving early detection of PD globally.

1. Introduction

The purpose of this systematic review is to provide a comprehensive overview of automated Parkinson’s disease (PD) detection using deep learning models and to further promote deep learning models as a potential computer-aided diagnostic (CAD) tool for clinical decision support systems. In Section 1, we introduce the background of PD, the limitations of the current diagnostic method, and the CAD tool as a possible solution to alleviate the burden on neurologists. Thereafter, we elaborate on the benefits of deep learning models over machine learning models as CAD tools and illustrate the mechanics of the two most popular types of deep learning models: the convolutional neural network (CNN) and long short-term memory (LSTM). Section 2 describes the adoption of the PRISMA model for the systematic review of automated PD detection studies using deep learning models; a total of 63 studies were chosen after a systematic removal of irrelevant studies. In Section 3, these studies are split into two categories, brain analysis and motor symptoms, and data analysis and visualization are performed for each category. In Section 4, we discuss the current trends observed from the 63 research studies and the limitations of deep learning models for CAD detection, and we present proposed directions for future work which can increase the adoption of deep learning models as CAD tools. Finally, Section 5 concludes the review by summarizing the key findings, the limitations, and the potential of deep learning models as a CAD tool to support clinical decisions.

1.1. Background

PD is an incurable neurological disease that results in progressive deterioration within the central nervous system and debilitating neurological symptoms [1]. The underlying cause of the neurodegeneration in PD is still only partially understood, but key pathophysiological features are the gradual loss of dopaminergic neurons in a part of the midbrain known as the substantia nigra pars compacta (SNpc), and the accumulation of misfolded alpha-synuclein protein in ‘Lewy bodies’ within the cytoplasm of neuronal cells in several different brain regions [2]. The dopaminergic pathway between the SNpc and the dorsal striatum, also known as the nigrostriatal pathway, is critical for movement control. Hence, disruption to the nigrostriatal pathway results in motor abnormalities in individuals affected with PD, including tremors, rigidity, and bradykinesia [3]. Affected individuals also experience non-motor symptoms, including constipation, depression, sleeping disorders, and a reduced sense of smell [1,3].
Between 1990 and 2016, the number of people diagnosed with PD more than doubled, from 2.5 million to 6.1 million, and the age-standardized prevalence rate increased by 21.7% [4]. Hence, PD is one of the most prevalent neurological disorders, with immense societal impacts, yet no curative treatments [5]. The gold standard treatment for PD is the dopamine precursor amino acid levodopa, which, in the initial stages of PD at least, can alleviate many motor symptoms by substituting for striatal dopamine loss [6]. However, its use can be complicated by the development of motor complications, including drug-induced dyskinesias, and patients also have L-DOPA-resistant motor features, including treatment-resistant tremor, postural instability, and swallowing and speech disorders [2]. A range of modifications of dopaminergic treatments, as well as non-dopaminergic pharmacological therapies and non-pharmacological treatments such as deep brain stimulation, may be required over time. Rehabilitation and psychosocial supports are also key to maintaining affected individuals’ quality of life, and thus early diagnosis, to allow the instigation of expert multidisciplinary care, is a key priority. Moreover, novel therapies that may actually modify the underlying disease processes are the goal of a large body of global research: it is likely that such advanced therapeutics, such as gene therapy, will need to be instigated as soon as possible in order to have maximal effect, as has been found to be the case for other degenerative conditions such as spinal muscular atrophy [7]. Therefore, early diagnosis is especially crucial in the optimal current and future management of PD, to ensure maximal functional outcomes for affected individuals.
At present, the diagnosis of PD is based on core clinical features, and the accuracy of clinical diagnosis can be improved by following standard clinical criteria, such as those of the UK Parkinson’s Disease Society Brain Bank (UKPDSBB) [8], which require the presence of bradykinesia and the absence of certain exclusion criteria. These clinical criteria rely on the expertise of a neurologist but are still imperfect: for example, the diagnostic accuracy using the UKPDSBB criteria, even in specialist neurology centres, is only slightly above 80% when compared with post-mortem pathological examination as the gold standard [9]. Moreover, there is a global shortage of neurologists, especially in countries with aging populations, where there is a high frequency of neurological disorders [10]. This increases the waiting time for affected individuals to be diagnosed with PD. As a consequence, 60% of the dopaminergic neurons are typically lost by the time of diagnosis [2].
In efforts to meet healthcare demands, there is interest in the possibility of using CAD tools based on artificial intelligence methods, namely machine learning (which potentially involves the more conventional pattern recognition approaches) or deep learning (which may involve sophisticated multi-layered neuronal systems), to perform an automated diagnosis of PD [11,12,13]. These CAD tools can perform automated detection using biomarkers of PD, such as electroencephalogram (EEG) signals, posture analysis of the gait cycle, voice aberration, or brain imaging such as Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) [14]. In a conventional machine learning model, it is mandatory to extract features from the biomarkers and then select the most salient features in order to train the model [15,16,17,18,19]. This is a required step because machine learning models by themselves are not capable of learning from high-dimensional data in its raw form; otherwise, the model is likely to overfit the dataset [20]. Also, the selection of the most relevant features must be carried out by an experienced expert who is knowledgeable about the various feature selection tools [15,16]. This has led to the somewhat poor adoption of machine learning models as future CAD tools, as feature extraction and selection can be complicated procedures comprehensible to machine learning experts, but not to the end-users of the CAD tool [21,22]. Such end-users may include healthcare experts such as practicing clinicians, health researchers, or experts in other application domains.
Deep learning models, which are of increasing interest in the era of big data, can resolve some of the limitations of machine learning models by eliminating the need for separate feature extraction and selection tools. Such models are capable of learning high-dimensional data, and they may function analogously to the neurons in the human brain [23]. The conventional form of a neural network, known as an artificial neural network (ANN), consists of three main layers: the input, the hidden, and the output layer, as shown in Figure 1. All three layers within a neural network contain artificial neurons that are interconnected, as denoted by the black lines. As the neural network learns via a learning algorithm (e.g., backpropagation), the weights of the connections (black lines) between the neurons are updated iteratively [23]. Each neuron, which acts as an individual classifier, determines its output signal after processing the weighted inputs from its previous connections [23].
When an ANN is constructed with an architecture that has more than one hidden layer, the system is known as a deep neural network (DNN), and such systems are capable of learning data with a higher degree of complexity [23] (Figure 1). Other classes of deep learning models, such as the CNN, the recurrent neural network (RNN), and the LSTM, use the DNN as their underlying architecture.
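To make the layered structure concrete, the following is a minimal sketch of a DNN with two hidden layers, written with the Keras API; the layer sizes, activation functions, and the assumption of a binary PD-versus-healthy-control output are illustrative choices and are not taken from any of the reviewed studies.

```python
# Minimal sketch of a fully connected deep neural network (DNN) with two
# hidden layers for a hypothetical binary PD vs. healthy-control task.
# Layer sizes and the optimizer are illustrative assumptions only.
import tensorflow as tf

def build_dnn(num_features: int) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(num_features,)),          # input layer
        tf.keras.layers.Dense(64, activation="relu"),   # hidden layer 1
        tf.keras.layers.Dense(32, activation="relu"),   # hidden layer 2
        tf.keras.layers.Dense(1, activation="sigmoid"), # output layer (PD probability)
    ])
    # Training with model.fit() updates the connection weights iteratively
    # via backpropagation, as described above.
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```

For example, calling build_dnn(22) would build a network for a 22-dimensional feature vector; the same layered principle scales to the deeper architectures discussed in the following sections.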

1.2. Convolutional Neural Network (CNN)

In a CNN model, the input layer of a typical DNN model is replaced by a series of convolutional and pooling layers, as shown in Figure 2. If the DNN is described as the neurons in our brain, then the CNN may be considered as the human visual system [24]. The first convolutional layer contains numerous filters which extract features from the input image to generate multiple feature maps. The subsequent pooling and convolutional layers reduce the dimensions of the feature maps and further enhance the features, thereby reducing the complexity of the feature maps and the likelihood of overfitting [25]. This can be considered analogous to the human visual system, where the visual cortex attempts to break down images into simpler representations so that the brain can perceive the image with ease [24].
After the final pooling layer, the feature maps are converted into a single-list vector at the flatten layer (Figure 2). The neurons in the subsequent neural network, also known as the fully connected layers, then learn to recognize the features from this vector and perform the image classification [25]. Hence, CNN models are known for their exemplary image recognition ability, and many studies have demonstrated the success of CNNs in medical imaging, including the recognition of breast tumors and eye diseases using mammogram and color fundus images, respectively [26]. Apart from medical images, CNNs have also demonstrated success in biometric face recognition systems for human tracking purposes [27,28].
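As an illustration of the convolution, pooling, flatten, and fully connected pipeline described above, the sketch below builds a small CNN in Keras; the input size, filter counts, and kernel sizes are assumptions made for illustration and do not replicate any of the reviewed models.

```python
# Minimal sketch of the CNN pipeline: convolution and pooling layers extract
# and condense feature maps, a flatten layer converts them into a single-list
# vector, and fully connected layers perform the classification.
import tensorflow as tf

def build_cnn(input_shape=(128, 128, 1)) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, kernel_size=3, activation="relu"),  # feature maps
        tf.keras.layers.MaxPooling2D(pool_size=2),                     # reduce dimensions
        tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu"),  # enhanced features
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Flatten(),                                     # single-list vector
        tf.keras.layers.Dense(64, activation="relu"),                  # fully connected layer
        tf.keras.layers.Dense(1, activation="sigmoid"),                # PD vs. HC output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```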

1.3. Long Short-Term Memory (LSTM)

The LSTM model is an improvement on its predecessor, the RNN [16]. As its name suggests, the LSTM model attempts to mimic how the brain stores memories and makes predictions based on immediate past events stored in those memories [24]. Both RNN and LSTM models are known for their ability to recognize patterns in sequential data [16]. However, the vanishing gradient has been a very common problem in RNN models: when a large information gap exists between new and old data, error signals vanish during the model’s training phase. As a result, the RNN model is not able to learn data that have long-term dependencies. Hence, the LSTM model was developed to resolve the vanishing gradient problem of RNN models [29].
The neurons in a typical LSTM model adopt a unique gate structure [30], denoted as the forget gate, the input gate, and the output gate (Figure 3). The input gate decides if the new information (xt) should be stored in the cell, the output gate decides what information should be output as the hidden state (ht), and the key to eliminating the vanishing gradient problem lies in the forget gate [30,31]. The sigmoid (σ) function in the forget gate is used to decide whether the information carried from the previous cell state (Ct−1) should be kept or forgotten, thereby removing irrelevant data and resetting the information in the cell appropriately [30,31]. This prevents the large discrepancies between old and new information that would eventually lead to vanishing gradient problems. In addition, useful information is continuously backpropagated in the LSTM model, allowing it to memorize patterns with long-term dependencies [30,31]. Hence, the strong pattern recognition ability of LSTM models is widely exploited in applications such as speech and handwriting recognition [32,33]. LSTM models are also suitable for forecasting stock prices in financial markets, which are dynamic and non-linear in nature [34,35].
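For completeness, the gate computations described above can be written in their standard form, where x_t is the new input, h_{t-1} the previous hidden state, C_{t-1} the previous cell state, σ the sigmoid function, ⊙ element-wise multiplication, and the W and b terms the learned weights and biases. This is the standard LSTM formulation [29,31], not a formulation specific to any reviewed study:

$$
\begin{aligned}
f_t &= \sigma\left(W_f\,[h_{t-1}, x_t] + b_f\right) && \text{(forget gate)}\\
i_t &= \sigma\left(W_i\,[h_{t-1}, x_t] + b_i\right) && \text{(input gate)}\\
\tilde{C}_t &= \tanh\left(W_C\,[h_{t-1}, x_t] + b_C\right) && \text{(candidate cell state)}\\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t && \text{(updated cell state)}\\
o_t &= \sigma\left(W_o\,[h_{t-1}, x_t] + b_o\right) && \text{(output gate)}\\
h_t &= o_t \odot \tanh(C_t) && \text{(hidden state output)}
\end{aligned}
$$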

2. Materials and Methods

This systematic review applied the PRISMA model [36] to analyze the most relevant studies on PD detection using deep learning models from the period January 2011 to July 2021. All the resources were systematically searched through PubMed, Google Scholar, IEEE, and Science Direct using the Boolean search strings shown in Table 1. A total of 794 studies containing these Boolean search strings were identified, comprising 178 studies from PubMed, 248 from Google Scholar, 135 from IEEE, and 233 from Science Direct. From the 794 articles initially identified, 110 duplicate studies were removed. After this, a further 612 articles (61 traditional machine learning studies, one non-human study, 104 conference papers, 402 non-CAD-for-PD studies, 14 irrelevant studies, 14 non-English articles, and 16 books) were excluded according to their relevance to this review. Eight studies were further removed from the list as they did not provide model-accuracy results. The final number of research studies that qualified for inclusion in this review was 63. Figure 4 shows the detailed process of the PRISMA method in the selection of the most relevant articles.

3. Results

There are two parts to this section. Section 3.1 Brain analysis covers 23 deep learning studies performed on Single Photon Emission Computed Tomography (SPECT), PET, MRI, ultrasound, and EEG. Section 3.2 Motor symptoms covers 40 deep learning studies performed on gait, handwriting, speech, Electromyogram (EMG), and other movement-related tests. The details of the deep learning studies under brain analysis and motor symptoms categories are in Appendix A Table A1 and Table A2, respectively.

3.1. Brain Analysis

MRI, PET, and SPECT are the common brain imaging modalities used to diagnose PD. Public image datasets for these three imaging modalities can be downloaded from the Parkinson’s Progression Markers Initiative (PPMI) database (https://www.ppmi-info.org/, accessed on 12 October 2021). Numerous studies in Appendix A Table A1 have attempted to develop deep learning models to distinguish the brains of PD patients from those of healthy controls. Among them, a majority chose SPECT images to train their deep learning models: 8 studies used SPECT images, 5 studies used MRI images, and 3 studies used PET images (Figure 5). Studies that used SPECT images for automated PD detection also achieved higher model performance compared to those using MRI and PET images (Figure 6). This may be because DaTscan is used for SPECT imaging. DaTscan is the name of the radioactive tracer, ioflupane (I123), that is specifically used to detect dopamine transporters in the brain [37]. Hence, it can better represent the loss of dopaminergic neurons in the PD brain [38]. On the other hand, the radioactive tracer used in PET for PD diagnosis is known as 18F-FDG, which is primarily used to assess neuronal function via regional cerebral glucose metabolism [39].
A majority of the studies that focused on image analysis proposed CNN models for the automated detection of PD (Figure 5). In the case of SPECT imaging, the highest performing CNN model was developed by Choi et al. [37], who evaluated their proposed model (i.e., PD net) with two datasets: the PPMI dataset, on which it obtained an accuracy of 96%, and a private dataset (SNUH cohort), on which it obtained an accuracy of 98.8% (Figure 7, Appendix A Table A1). Both results exceeded the performance of two human raters, whose accuracies on the PPMI dataset were 90.7% and 84%, respectively. Only one study, by Ozsahin et al. [40], proposed a back-propagation neural network (BPNN), which achieved the highest model accuracy of 99.6% using binarized SPECT images (Figure 7, Appendix A Table A1). However, the applicability of the CNN model has been advocated in a majority of studies on SPECT imaging (Figure 5). In any event, we aver that, for practical and ethical purposes, the suitability of the CNN or the BPNN model for SPECT imaging should still be assessed via clinical trials. As for the PET and MRI study cases, the highest performing CNN models achieved accuracies of 93% [41] and 95.3% [42], respectively (Figure 7, Appendix A Table A1).
To date, only the study by Shen et al. [43] has attempted to use ultrasound, namely transcranial sonography (TCS) images, for automated PD detection (Appendix A Table A1). They proposed a deep learning model known as the multiple kernel mapping—broad learning system (MEKM-BLS), which has wider feature and enhancement nodes than a typical DNN model. This method has the ability to map the features from the feature nodes directly onto the enhancement nodes. However, their model only achieved an accuracy of 78.4%, lower than that of the MRI, PET and SPECT studies. Nonetheless, ultrasonography has several advantages, such as its low cost, speed, and absence of radiation exposure [44]. Furthermore, a study by Mehnert et al. [44] demonstrated that the interpretation of TCS for PD diagnosis can reach a sensitivity of 95% by experienced sonographers. Hence, there is room for improvement for ultrasonography in automated PD detection, and future work implementing CNN models for the interpretation of TCS images should be considered.
Apart from brain imaging, physiological signals such as the EEG can also reflect brain abnormalities that are unique to PD [45]. In particular, it has been reported that the EEG frequency of a PD patient is abnormally slow compared to that of a healthy individual [46]. In this review, we found 6 studies that proposed deep learning models to recognize EEG characteristics for the automated detection of PD. Half of these studies proposed the use of the CNN model [25,47,48], and the remaining three proposed an RNN [49], a DNN [50], and a hybrid deep learning model that combines CNN and RNN algorithms [51] (Figure 5). The best-performing model was developed by Khare et al. [47], who proposed a CNN model with smoothed pseudo-Wigner Ville distribution (SPWVD) features from EEG signals as input and obtained an accuracy close to 100% (Figure 7, Appendix A Table A1). This shows that CNN models are likely to achieve high classification accuracy for one-dimensional data such as EEG signals. Like ultrasound data, EEG data are cheaper to acquire and offer a low-risk alternative to the MRI, PET, and SPECT datasets, but unlike ultrasound, the overall accuracy of studies that implemented EEG signals (95.8%) is on par with studies that used SPECT images (94.1%) (Figure 6).
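As a simple illustration of how a one-dimensional EEG segment can be turned into a time-frequency image for a 2D CNN, the sketch below uses a plain short-time Fourier transform spectrogram from SciPy; note that this is a stand-in for, not a reproduction of, the SPWVD representation used by Khare et al. [47], and the sampling rate and window parameters are assumed values.

```python
# Illustrative sketch: convert a 1D EEG segment into an image-like
# time-frequency representation that a 2D CNN can classify.
import numpy as np
from scipy import signal

fs = 256                                   # assumed EEG sampling rate (Hz)
eeg_segment = np.random.randn(10 * fs)     # placeholder for one 10 s EEG epoch

# Plain STFT spectrogram (a simple alternative to the SPWVD used in [47]).
f, t, sxx = signal.spectrogram(eeg_segment, fs=fs, nperseg=128, noverlap=64)
spectrogram_db = 10 * np.log10(sxx + 1e-12)    # log power for better contrast
cnn_input = spectrogram_db[..., np.newaxis]    # add channel axis: (freq, time, 1)
print(cnn_input.shape)                         # image-like tensor fed to the CNN
```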

3.2. Motor Symptoms

Since PD is characterized by impaired motor control, assessments of motor function can be utilized for the diagnosis of PD. Such assessments could include gait, handwriting, speech, and other movement-related tests, as illustrated in Figure 8.
Gait refers to the walking pattern of an individual. In the case of PD, the body’s stiffness and postural instability may worsen as the disease progresses, leading to gait disturbance [52]. In this respect, gait features can be utilized to train deep learning models for the detection of PD. The key features of gait include kinetic features such as the ground reaction force (GRF) and kinematic features such as the stance and swing phases of the foot [52]. There are currently 11 deep learning studies that have attempted to analyze gait for PD detection, and a wide variety of deep learning models have been proposed (Figure 9, Appendix A Table A2). Among them, two studies that proposed hybrid models combining the CNN and LSTM achieved a high overall accuracy [53,54] (Figure 9). The best-performing hybrid CNN-LSTM model was proposed by Xia et al. [53], using vertical GRF measured at multiple points of time during the gait cycle. The idea of implementing a hybrid CNN-LSTM model for gait analysis is to have the CNN layers extract the salient gait features and the LSTM layer analyze the temporal pattern of these features over the walking cycle. With this approach, Xia et al. [53] achieved the highest model accuracy of 99.1% (Figure 9, Appendix A Table A2), using a dataset that came from three research groups [55,56,57]. Similarly, two other studies that proposed DNN [58] and LSTM [59] models also achieved high-performance results on par with the CNN-LSTM model (Figure 9, Appendix A Table A2). Hence, future deep learning studies based on gait analysis could focus on the development and implementation of these three models.
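The hybrid idea described above, convolutional layers for local feature extraction followed by an LSTM layer for temporal modelling, can be sketched as follows; the sequence length, channel count, and layer sizes are illustrative assumptions and do not reproduce the architecture of Xia et al. [53].

```python
# Minimal sketch of a hybrid CNN-LSTM for gait analysis: 1D convolutions
# extract local features from a multi-sensor VGRF time series, and an LSTM
# layer models their temporal pattern across the gait cycle.
import tensorflow as tf

def build_cnn_lstm(timesteps=100, channels=18) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(timesteps, channels)),
        tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),  # local gait features
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.LSTM(64),                                      # temporal dependencies
        tf.keras.layers.Dense(1, activation="sigmoid"),                # PD vs. HC
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```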
The deterioration of handwriting ability is another telltale symptom of PD; it is seen in a majority of PD patients but is not included as a diagnostic criterion of PD [60]. A PD patient may exhibit abnormally small handwriting, termed micrographia, due to rigidity and tremors in the writing arm [61]. Thirteen deep learning studies have attempted to diagnose PD using handwritten drawings from one of the three common PD handwriting datasets: the PaHaW dataset [62], HandPD [63], and NewHandPD [64]. All three datasets involve a series of drawing and writing tests, and one test common to all three is the spiral drawing test. As with brain imaging, most studies proposed CNN models to differentiate the handwritten drawings of PD patients from those of healthy controls (Figure 10). The best performance was achieved by Kamran et al. [65], who tested six common transfer learning architectures of the CNN, namely AlexNet [66], GoogleNet [67], VGGNet-16/19 [68], and ResNet-50/101 [69]. These transfer learning models had been previously trained on the well-known ImageNet dataset, which consists of more than 1 million images. Kamran et al. [65] then fine-tuned the transfer learning models to adapt to the handwritten drawings of PD patients and healthy controls, and the highest model accuracy of 99.22% was achieved with AlexNet [66] (Figure 10, Appendix A Table A2).
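The transfer learning procedure described above can be sketched as follows, using an ImageNet-pretrained ResNet50 backbone (one of the architecture families tested by Kamran et al. [65]) with a new classification head for drawing images; the image size, freezing strategy, and head design are illustrative assumptions rather than the published configuration.

```python
# Hedged sketch of transfer learning for spiral-drawing classification:
# reuse an ImageNet-pretrained backbone as a feature extractor and fine-tune
# a new binary classification head on PD vs. HC drawings.
import tensorflow as tf

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional features initially

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)   # PD vs. HC drawing
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

After the new head converges, some or all of the backbone layers can be unfrozen and trained with a small learning rate, which is the usual fine-tuning step in this kind of workflow.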
Only two studies have so far attempted to use small-scale movement-related tests, namely swallowing [70] and finger tapping [71] (Appendix A Table A2). These two studies each proposed a different deep learning model, and the best performance of 82.3% was achieved by Jones et al. [70], using an ANN model with videofluoroscopic and manometric data collected from boluses delivered to the subject’s oral cavity using a syringe. Videofluoroscopic data include information such as laryngeal, hyoid, and epiglottic movement, while manometric data include information such as the rise time and rate of the velopharynx and mesopharynx.
Besides visible movement disorders, the muscle control of speech is also affected in PD [72]. As a consequence, people with PD may experience voice abnormalities such as lower voice volume and slurred speech [72]. There are currently twelve studies that have attempted to use voice aberration to diagnose PD (Figure 11, Appendix A Table A2). A wide variety of deep learning models have been proposed, with half of these studies using CNN models (Figure 11). Two of the CNN models achieved high model accuracies of 99.5% [73] and 99.4% [74] (Figure 11, Appendix A Table A2). However, the best performing model was developed by Ali et al. [75], who proposed a genetically optimized neural network (GONN) with a model accuracy of 100% (Figure 11, Appendix A Table A2). At present, more studies support the CNN model for speech analysis. Nonetheless, it should be noted that clinical trials are required to determine whether the GONN or the CNN is the better alternative for speech analysis.
Like the analysis of the brain, the motor symptoms of PD can also be assessed by physiological signals, namely the EMG. However, only one deep learning study has attempted to use EMG for PD diagnosis, using an ANN model [76], and the performance of the proposed model was 71%, lower than that of the studies focused on gait, handwriting, and speech (Appendix A Table A2). Hence, for EMG to be recognized as a potential biomarker for PD diagnosis, more research in this area is required. Otherwise, datasets such as handwriting and speech recordings, which have easier data collection procedures, are better alternatives to EMG.
Lastly, two studies did not limit themselves to only one type of modality (Appendix A Table A2). The study of Vasquez-Correa et al. [77] used three input signals—speech, handwriting, and gait—for a multimodal analysis of PD using the CNN model and achieved 97.6% accuracy. Oung et al. [78] used two input signals, speech and motion data derived from wearable sensors, to propose an extreme learning machine (ELM) for the detection of PD. Their ELM architecture is similar to an ANN model in that there is only one hidden layer in the network, but the training process of an ELM differs from that of an ANN. Essentially, the ELM model requires only a single training step: the hidden-neuron weights are assigned randomly and only the output weights are computed, which results in a much faster training time and less overfitting compared with the ANN model [79]. The model accuracy of the ELM obtained by Oung et al. [78] was 95.9%, which is comparable to the accuracy of the CNN model proposed by Vasquez-Correa et al. [77] (Appendix A Table A2). Based on a synthesis of this information, we conclude that deep learning models capable of multimodal analysis of PD may be a useful practical tool for neurologists. In the future, as more clinical information, and particularly detailed and correctly labelled electronic datasets, become available, deep learning models may further aid in the diagnosis of PD. Hence, future studies on deep learning should perhaps consider using multiple types of input signals for PD detection, instead of relying on just a single modality.
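To illustrate why ELM training is so fast, the following minimal NumPy sketch shows the single-step training procedure: the hidden-layer weights are drawn at random and only the output weights are solved for via a pseudo-inverse. The hidden-layer size and activation are illustrative assumptions, not the configuration used by Oung et al. [78].

```python
# Minimal NumPy sketch of an extreme learning machine (ELM):
# random hidden weights, output weights computed in one least-squares step.
import numpy as np

def train_elm(X, y, n_hidden=100, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input-to-hidden weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # output weights solved in one step
    return W, b, beta

def predict_elm(X, W, b, beta):
    # Continuous score; threshold (e.g., at 0.5) for a PD vs. HC decision.
    return np.tanh(X @ W + b) @ beta
```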

4. Discussion

There are five parts to this section. Section 4.1 provides a summary of the results gathered from the previous section. Section 4.2 discusses the challenges that are affecting the adoption of CAD in healthcare. Section 4.3 provides solutions to tackle the challenges highlighted in Section 4.2, Section 4.4 describes the future vision of the CAD tool in the diagnosis of PD, and Section 4.5 lists the limitations of this review.

4.1. Result Summary

The application of deep learning models as a CAD tool for the automated diagnosis of PD has been gaining popularity over many years. From Figure 12, the number of deep learning studies as of July 2021 has reached 12, which is more than half the number of studies in 2020 (18 studies). Hence, it is very likely that the number of studies by the end of 2021 will exceed that of 2020. Every year, the number of deep learning studies based on motor symptoms exceeds that of brain analysis (Figure 12). This might be due to the ease of data acquisition for motor symptoms, as the collection of data is less complicated than for brain analysis and most of the datasets are publicly available. The overall model performance achieved by deep learning studies in each modality is favorable, especially for common modalities like MRI, PET, SPECT, EEG, gait, handwriting, and speech, whose overall model accuracies all exceeded 80% (Figure 13).
This review underscores the following key aspects of the current deep learning studies for automated PD diagnosis:
  • Deep learning models proposed by various studies have achieved a high predictive accuracy for the diagnosis of PD (Figure 13).
  • About 57% of the deep learning studies for automated PD detection had proposed using the CNN model (Figure 14).
  • CNN models have been shown to achieve high prediction accuracy for image classification tasks such as brain imaging (SPECT, PET, and MRI) and handwriting recognition.
  • Our results have also shown that the CNN has good performance in detecting abnormalities from one-dimensional signals like EEG [47] and speech [73].
  • Gait analysis, on the other hand, seems to perform better with either a hybrid CNN-LSTM, DNN, or LSTM model. However, more research is required to determine the best-performing model.
  • Apart from the CNN model, Ozsahin et al. [40] and Ali et al. [75] proposed the BPNN and GONN for SPECT and speech analysis, respectively, and obtained the highest prediction accuracies.
  • However, clinical trials are required to prove the suitability of the proposed deep learning model for each modality.

4.2. Challenges Faced by CAD Tools in Healthcare Adoption

Despite the high prediction accuracy obtained by many deep learning models proposed in various automated PD detection studies, the adoption of deep learning models as CAD tools in healthcare is currently not well supported [21,22]. In their current form, neither neurologists nor other healthcare workers are comfortable relying on CAD tools to diagnose PD. This is due to several challenges, as listed below:
  • Lack of standards
The diagnosis of PD has been reliant on clinical features for many years, and neurologists have been trained to recognize sets of clinical features to determine a diagnosis [8]. For instance, the diagnostic criteria provided by the UKPDSBB (i.e., the presence of bradykinesia and the absence of certain exclusion criteria) are not adopted by current deep learning, or even machine learning, studies. Instead, a majority of the deep learning studies in this review focused on only one modality rather than adopting a multimodal approach, which is not practical for clinical use. Deep learning models also do not recognize the features of PD the same way a human neurologist would. For example, deep learning models can detect PD from brain imaging by means of a vectorized image instead of a clinical feature, which does not follow the existing diagnostic criteria [80]. Hence, neurologists may be hesitant to use CAD tools that deviate greatly from their comfort zone or that do not provide a clinically trusted artificial intelligence framework that is explainable and interpretable for future clinical practice.
  • Poor interpretability
Deep learning models are also known as ‘black boxes’, so it is almost impossible to clearly understand the mechanisms behind a deep learning model when it makes a given prediction [22,23]. Despite the high prediction accuracy achieved, end-users of CAD tools (e.g., neurologists and healthcare workers) cannot make a diagnosis without sufficient evidence, and this evidence is not currently provided by deep learning models [21,23]. Hence, neurologists are not able to trust CAD tools, as they cannot afford to make a diagnosis without concrete evidence and without explainability and interpretability of the somewhat black-box method used to produce an outcome.
  • Psychological barriers
In the healthcare industry, human behavior must always be considered when designing a CAD tool for a target consumer audience. The common psychological barriers affecting the adoption of new technologies are the endowment effect and the status quo bias. The endowment effect is where an individual values their possessions more highly than their original market value [81], whereas the status quo bias is the preference of an individual to remain in their comfort zone and maintain their environment in the same state [82]. Both of these emotional biases are likely to cause an individual, a neurologist for example, to feel a significant sense of loss when switching from manual diagnosis to relying on a CAD tool for diagnosis.
There are many other factors, such as the difficulty of obtaining regulatory approval and poor interoperability, which refers to the ability of two systems to communicate with each other [22]. For example, if two hospitals use different electronic health systems, the data from these two hospitals may not be coherent and the systems might not communicate with each other. These two concerns, however, should come after a prototype of the CAD tool has been developed. For instance, a developer must first develop a working prototype before applying for the necessary International Organization for Standardization certifications. At present, research on using deep learning models as CAD tools has yet to attract end-users and convince them to support the implementation of CAD tools in healthcare systems. As such, researchers must tackle the three main challenges listed above and improve the versatility of existing deep learning models. Only when end-users are satisfied with the outcome (i.e., explainability) and the benefits (i.e., accuracy of feature extraction) of the CAD tool will they become more willing to support its adoption in healthcare. In the absence of this, research into CAD-based tools for the automated detection of PD, and even of other diseases, may continue to fall into the ‘valley of death’, where applied research accumulates without being translated into real clinical practice, leading to a widening of the gap between applied research and the translation of its benefits into clinical practice [83].

4.3. Solutions to Promote Adoption of CAD

Moving forward, with the aim of translating the potential benefits of deep learning methods into future clinical practice, researchers and end-users need to better understand that a CAD-based tool should not position itself to replace the end-user’s role in diagnosing the disease. This is a common misunderstanding, as deep learning and machine learning studies often claim high success for their proposed models in the absence of end-user involvement. Consequently, a false notion of the CAD tool replacing the end-user is created. Instead, the CAD tool should aim to provide alternatives and additional opinions in the diagnosis of disease for the end-users to consider, thereby increasing the end-user’s confidence while simultaneously reducing errors. The adoption of CAD tools should hence improve the efficiency of clinical diagnosis and further help predict possible diseases and identify alternative treatment options for end-users such as clinicians to consider in their day-to-day work. However, too often both deep learning and machine learning models do not provide additional information other than their predicted results, so they may not be helpful to end-users as a prediction tool that is supported neither by visible clinical features nor by a detailed explanation of how the results were reached. Hence, the authors of future deep learning studies for automated PD detection, and also for other diseases, should include visual cues, such as segmentation, as an explanatory function in their deep learning architecture. An example of the workflow process that we propose for a practical CAD tool is illustrated in Figure 15.
In Figure 15, we present two alternatives. The first alternative is to configure a deep learning model that can perform the diagnosis (i.e., identification of the ailment) and segmentation (i.e., explanation, or detailed information) simultaneously. The second alternative is to perform diagnosis in the first stage and, in the second stage, perform segmentation only on the input image or signal that was diagnosed as PD in the first stage. In either case, it would be useful to provide additional information such as the time frame of abnormal physiological signals, the striatal volume, and the percentage of dopaminergic neurons lost for image analysis. Also, deep learning models, and even machine learning models, are composed of complicated algorithms that neurologists may not necessarily understand. Hence, visual cues could make up for the poor interpretability of deep learning models by allowing neurologists to ‘see’ what has been identified as abnormalities by the model.
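As a conceptual sketch of the first alternative (simultaneous diagnosis and segmentation), the following Keras snippet defines a single network with a shared encoder and two output heads, one for the PD/HC diagnosis and one for a coarse segmentation map that can serve as a visual cue; the architecture and layer sizes are purely illustrative assumptions and are not drawn from any of the reviewed studies.

```python
# Conceptual sketch: shared encoder with a diagnosis head and a
# segmentation head, producing a prediction plus a visual cue together.
import tensorflow as tf

inputs = tf.keras.Input(shape=(128, 128, 1))
x = tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
x = tf.keras.layers.MaxPooling2D(2)(x)
x = tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same")(x)

# Diagnosis head: global pooling followed by a binary classifier.
diag = tf.keras.layers.GlobalAveragePooling2D()(x)
diag = tf.keras.layers.Dense(1, activation="sigmoid", name="diagnosis")(diag)

# Segmentation head: upsample back to the input resolution as a visual cue.
seg = tf.keras.layers.UpSampling2D(2)(x)
seg = tf.keras.layers.Conv2D(1, 1, activation="sigmoid", name="segmentation")(seg)

model = tf.keras.Model(inputs, [diag, seg])
model.compile(optimizer="adam",
              loss={"diagnosis": "binary_crossentropy",
                    "segmentation": "binary_crossentropy"})
```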
The provision of visual cues may greatly contribute to the acceptance of CAD tools in healthcare. Looking at the behavioral trade-off matrix in Figure 16, innovative products are known to fall into one of its categories [84]. At present, neurologists rely on clinical features and visual inspection to diagnose PD. However, the deep learning studies gathered in this review developed models with high prediction accuracy that are not accompanied by an evidence-based diagnosis. This results in a large degree of behavioral and product change, as neurologists would have to forgo evidence-based diagnosis if they switched from visual inspection to relying on CAD tools for PD diagnosis. As a consequence, the current deep learning models developed by the various studies in this review fall into the ‘Sure failures’ category in Figure 16, discouraging their adoption into healthcare. The inclusion of visual cues in the deep learning model, however, decreases the degree of behavioral change to ‘low’, as the deep learning model segments the brain abnormalities so that the neurologist can inspect the brain images with greater ease. This will also greatly boost the neurologist’s confidence in deep learning models, especially when their own prediction coincides with that of the CAD tool. Therefore, the inclusion of visual cues as a function may allow deep learning-based CAD tools to move from the ‘Sure failures’ category to ‘Smash hits’, which would greatly encourage the adoption of CAD tools and ensure the long-term and short-term success of an innovative product [84].

4.4. Future Vision of the CAD Tool in the Diagnosis of PD

With the acceptance of CAD-based tools, the authors hope to alleviate the manual work burden of neurologists and other healthcare workers. Individuals affected by PD can also play a part by performing self-assessment with the aid of a CAD tool. This could also encourage individuals to seek professional help when the CAD tool predicts PD and urges that medical attention is required. Figure 17 is an example of a cloud-based CAD tool in which data can be accessed by any electronic device with internet access, such as smartphones and computers. An individual who suspects that they may have PD can use their smartphone to conduct a handwriting test, record their voice to detect speech aberration, or take a video of their walking cycle to perform gait analysis. These recorded pieces of evidence are useful information for the neurologist to confirm a diagnosis, which helps to increase efficiency and reduce the waiting time for diagnosis. In addition, handwriting, speech, and gait analysis are potential telemonitoring alternatives. Brain imaging modalities like SPECT, PET, and MRI require heavy machinery that is not practical to place at home, and recording devices for monitoring physiological signals like EEG and EMG are not common possessions in today’s households either. Hence, it is more practical to monitor PD progression through a smartphone that has built-in handwriting, speech, and video recording functions.
In this review, the authors have only demonstrated that deep learning models are promising CAD tools for PD diagnosis. However, a practical CAD tool should ideally be able to identify multiple diseases instead of PD alone. Hence, we hope that deep learning studies for other neurological diseases will also heed our advice and include visual cues as a function in their systems. In this way, deep learning models can be developed into clinically trusted CAD tools for clinical decision support, thereby taking deep learning models a step further towards adoption in healthcare and into a new phase of application in the health informatics industry.

4.5. Limitation of This Study

In spite of major contributions made through a detailed synthesis of the most relevant information on deep learning methods for clinical diagnosis purposes, this review comes with some limitations, as follows.
  • Deep learning studies for each modality (MRI, EEG, speech, etc.) may use different datasets to train their models. For example, studies interested in MRI may use a private dataset instead of the public dataset, PPMI. Hence, it can be difficult to compare the performance of two deep learning models that were not trained with the same dataset.
  • There is a lack of studies for ultrasound imaging, small movement-related tests, and multimodal analysis involving more than one modality. This makes it difficult to determine the best-performing model for these three categories.
  • The wide variety of deep learning models proposed for gait analysis also makes it challenging to determine the best-performing model; hence, it is difficult to decide between the top three best-performing models: CNN-LSTM, DNN, and LSTM.

5. Conclusions

PD requires early diagnosis and intervention to minimize the impact of this degenerative condition and ensure that affected individuals can remain self-sufficient for as long as possible. However, the imprecise nature of clinical diagnoses, and a worldwide lack of neurologists expert in PD diagnosis, often result in delayed diagnosis and suboptimal management of PD. Moreover, the likely success of advanced therapeutics such as gene therapy, currently under development, will be heavily influenced by early diagnosis. Thus, CAD tools based on deep learning models should be considered to alleviate the work burden of neurologists if they can perform fast and accurate PD diagnoses. In this study, we have reviewed 63 deep learning studies covering various modalities, such as brain analysis (SPECT, PET, MRI, and EEG) and motor symptoms (gait, handwriting, speech, and EMG). We show that deep learning models can achieve high prediction accuracy for PD, especially the CNN model, which is widely proposed by studies focused on image classification for brain imaging and handwriting analysis. The CNN model also performed well on one-dimensional signals like EEG and speech. However, deep learning models have yet to be supported by end-users such as neurologists and other clinicians due to a lack of evidence accompanying the disease prediction. Hence, this review proposes new solutions for future deep learning studies, in particular the inclusion of visual cues, such as the segmentation of abnormal areas, as a function in the deep learning model architecture. We also urge researchers to continue building deep learning models for other disease detection problems and to include visual cues in their models. It is hoped that researchers will be encouraged to adopt more explainable and interpretable methods in deep learning-based CAD tools, which can then be taken up by end-users and improve the healthcare outcomes for a growing number of individuals affected by PD worldwide.

Author Contributions

Conceptualization, H.W.L., U.R.A. and W.H.; methodology, H.W.L. and W.H.; formal analysis, H.W.L. and W.H.; investigation, H.W.L. and W.H.; writing—original draft preparation, H.W.L. and W.H.; writing—review and editing, C.P.O., S.C., P.D.B., R.C.D., J.S., E.E.P. and U.R.A.; data visualization, H.W.L. and W.H.; supervision, U.R.A.; project administration, H.W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. List of deep learning studies for various modalities in brain analysis.
Year | Author | Input Feature | Approach | Dataset | Accuracy (%)

MRI
2019 | Xiao et al. [85] | Quantitative susceptibility mapping (QSM) images | CNN | 87 PD; 53 HC (private) | 89.0
2021 | Yasaka et al. [86] | Radial kurtosis (RK) connectome matrix | CNN | 115 PD; 115 HC (private) | 81.0
2020 | Chakraborty et al. [42] | Normalized MRI images | CNN | 203 PD; 203 HC (PPMI) | 95.3
2020 | Tremblay et al. [87] | T2-weighted imaging | CNN | 15 PD; 15 HC (private) | 88.3
2019 | Shinde et al. [88] | A boxed region around the brainstem on the axial slices of the NMS-MRI as input | CNN (ResNet50) | 45 PD; 35 HC (private) | 80.0

PET/CT
2021 | Piccardo et al. [41] | [18F]DOPA PET/CT images | CNN (3D) | 43 PD; 55 HC (private) | 93.0
2019 | Shen et al. [89] | Laconic representation of PET images | Group Lasso Sparse Deep Belief Network (GLS-DBN) | 125 PD; 225 HC (private) | 90.0
2019 | Dai et al. [90] | Enhanced PET images | CNN (U-net) | 214 PD; 127 HC (PPMI) | 84.2

SPECT/DaTscan
2015 | Hirschauer et al. [91] | Inputs from all 8 diagnostic tests in database | Enhanced probabilistic neural network (EPNN) | 189 PD; 415 HC (PPMI) | 98.6
2020 | Ozsahin et al. [40] | Binarized images | Back propagation neural network (BPNN) | 1334 PD; 212 HC (PPMI) | 99.6
2017 | Choi et al. [37] | Normalized SPECT images | CNN | 431 PD; 193 HC (combination of 2 databases) | 98.8
2020 | Magesh et al. [92] | Normalized SPECT images | CNN (VGG16) | 430 PD; 212 HC (PPMI) | 95.2
2020 | Chien et al. [93] | Segmented striatal region images | CNN | 234 PD; 145 HC (private) | 86.0
2020 | Hsu et al. [94] | Grayscale + colour SPECT images | CNN (VGG) | 196 PD; 6 HC (private) | 85.0
2019 | Ortiz et al. [95] | Voxel features extracted via isosurfaces | CNN (3D) | 158 PD; 111 HC (PPMI) | 95.1
2018 | Martinez-Murcia et al. [96] | Normalized DaTSCAN images | CNN (AlexNet) | 448 PD; 194 HC (PPMI) | 94.1

Ultrasound
2018 | Shen et al. [43] | 73 features extracted from transcranial sonography (TCS) images | MEKM-BLS | 76 PD; 77 HC (private) | 78.4

EEG
2020 | Xu et al. [49] | End-to-end EEG signals | Pooling-based deep recurrent neural network (PDRNN) | 10 PD; 10 HC (private) | 88.6
2021 | Lee et al. [51] | Spatiotemporal features of EEG signals | CRNN | - | 99.2
2021 | Loh et al. [25] | Spectrogram images | CNN | 15 PD; 16 HC (public) | 99.5
2021 | Khare et al. [47] | Smoothed pseudo-Wigner Ville distribution | CNN | 15 PD; 16 HC (public) | 100
2018 | Oh et al. [48] | End-to-end EEG signals | 13-layer 1D-CNN | 20 PD; 20 HC (private) | 88.3
2020 | Shah et al. [50] | - | DNN | - | 99.2
Table A2. List of deep learning studies for various modalities in motor symptoms.
Year | Author | Input Feature | Approach | Dataset | Accuracy (%)

Gait
2019 | Xia et al. [53] | Multi-point vertical ground reaction force (VGRF) time series | CNN-LSTM | 93 PD; 73 HC (public) | 99.1
2016 | Nancy Jane et al. [97] | Temporal sequence of walking pattern | Q-BTDNN | 93 PD; 73 HC (public) | 93.1 [Ga]; 91.7 [Si]; 89.7 [Ju]
2020 | Som et al. [98] | Reduced features via PCA | Autoencoder | 18 PD; 16 HC (public) | 73.8
2020 | Zhang et al. [99] | Normalization and data augmentation | CNN | 656 PD; 2148 HC (public) | 86.0
2020 | Maachi et al. [58] | 18 1D signals | DNN | 93 PD; 73 HC (public) | 98.7
2021 | Balaji et al. [59] | Gait kinematic features | LSTM | - | 98.6
2020 | Yurdakul et al. [100] | NR-LBP | ANN | 93 PD; 73 HC (public) | 98.3
2018 | Zhao et al. [54] | 19 features | CNN-LSTM | 93 PD; 73 HC (public) | 98.0
2016 | Zeng et al. [101] | 19 features | RBF-NN | 93 PD; 73 HC (public) | 96.4
2020 | Alharthi et al. [102] | Ground reaction force | CNN | 93 PD; 73 HC (public) | 95.5
2020 | Butt et al. [103] | Kinematic features | LSTM | 64 PD; 50 HC (private) | 82.4

Handwriting
2021 | Folador et al. [104] | Histograms of oriented gradients (HOG) | CNN | 20 PD; 20 HC | 83.1
2019 | Yang et al. [105] | Key parameters: deviation (cm) and accumulation angle (rad) | GRNN | 21 PD; 24 HC | 98.9
2020 | Canturk et al. [106] | Fuzzy recurrence plot (FRP) | CNN | 25 PD; 15 HC | 94.0
2019 | Gil-Martín et al. [107] | CNN-based features | CNN | 62 PD; 15 HC | 96.5
2019 | Naseer et al. [108] | CNN-based features | CNN | 37 PD; 38 HC | 98.3
2021 | Gazda et al. [109] | Handwriting images | CNN | - | 94.7
2020 | Kamran et al. [65] | CNN-based features | CNN | PaHaW dataset [38/37], HandPD dataset [18/74], NewHandPD dataset [35/31], Parkinsons Drawing | 99.2
2018 | Pereira et al. [110] | CNN-based features | CNN | 74 PD; 18 HC | 95.0
2018 | Afonso et al. [111] | Recurrence plots to map the signals onto the image domain | CNN | 14 PD; 21 HC | 87.0
2019 | Ribeiro et al. [112] | Bags of sampling | RNN | 14 PD; 21 HC | 97.0
2019 | Diaz et al. [113] | Generated enhanced images | CNN | 37 PD; 38 HC | 86.67
2021 | Diaz et al. [114] | Kinematic and pressure features | CNN-RNN | PaHaW dataset [38/37], NewHandPD dataset [35/31] | 90.0
2020 | Nomm et al. [115] | Image of a drawn spiral enhanced by the velocity and pressure parameters | CNN | 17 PD; 17 HC | 93.0

Movement
2018 | Prince et al. [71] | Touch-screen and accelerometer waveforms | CNN | 949 PD; 866 HC | 62.1
2017 | Jones et al. [70] | Temporal manometric and videofluoroscopic data | ANN | 31 PD; 31 HC | 82.3

Speech
2018 | Putri et al. [76] | Various voice measurements | ANN | 15 PD; 8 HC | 94.4
2019 | Ali et al. [75] | Dimensionality reduction of all 26 features by LDA | GONN | 20 PD; 20 HC (Sakar, 2013) | 100
2015 | Peker et al. [116] | 12 features selected by minimum redundancy maximum relevance (mRMR) attribute selection algorithm | CVANN | 23 PD; 8 HC (Little, 2007) | 98.1
2019 | Wodzinski et al. [117] | Spectrogram images | CNN | 50 PD; 50 HC (PC-GITA) | 91.7
2016 | Avci et al. [118] | 22 biomedical voice measurements | ELM | 23 PD; 8 HC (Little, 2007) | 96.8
2017 | Gómez-Vilda et al. [119] | Absolute kinematic velocity (AKV) distribution | RLSFN | 53 PD; 26 HC (male); 38 PD; 25 HC (female) | 99.4
2020 | Nagasubramanian et al. [73] | All 26 features | CNN | 20 PD; 20 HC (Sakar, 2013) | 99.5
2020 | Xu et al. [120] | Spectrogram images | CNN | 20 PD; 20 HC (Sakar, 2013) | 91.2
2021 | Karaman et al. [121] | CNN-based features | CNN | mPower Voice database | 91.17
2011 | Åström et al. [122] | 10 vocal features | DNN | 23 PD; 8 HC (Little, 2007) | 91.2
2021 | Narendra et al. [123] | Raw speech and voice source waveforms | CNN | 50 PD; 50 HC (PC-GITA) | 68.6
2021 | Goyal et al. [74] | Combination of Resonance-based Sparse Signal Decomposition (RSSD) + Time-Frequency (T-F) algorithm | CNN | 16 PD; 21 HC and 20 HC | 99.4

EMG
2018 | Putri et al. [76] | 12 EMG features | ANN | 15 PD; 8 HC | 71.0

Mixture of inputs
2018 | Vasquez-Correa et al. [77] | Spectrogram images | CNN | 44 PD; 40 HC | 97.6
2017 | Oung et al. [78] | Empirical wavelet transform-based features | ELM | 50 PD; 15 HC | 95.93

References

  1. Politis, M.; Wu, K.; Molloy, S.; Bain, P.G.; Chaudhuri, K.R.; Piccini, P. Parkinson’s disease symptoms: The patient’s perspective. Mov. Disord. 2010, 25, 1646–1651.
  2. Balestrino, R.; Schapira, A.H.V. Parkinson disease. Eur. J. Neurol. 2020, 27, 27–42.
  3. Bhat, S.; Acharya, U.R.; Hagiwara, Y.; Dadmehr, N.; Adeli, H. Parkinson’s disease: Cause factors, measurable indicators, and early diagnosis. Comput. Biol. Med. 2018, 102, 234–241.
  4. Dorsey, E.R.; Elbaz, A.; Nichols, E.; Abd-Allah, F.; Abdelalim, A.; Adsuar, J.C.; Ansha, M.G.; Brayne, C.; Choi, J.-Y.J.; Collado-Mateo, D.; et al. Global, regional, and national burden of Parkinson’s disease, 1990–2016: A systematic analysis for the Global Burden of Disease Study 2016. Lancet Neurol. 2018, 17, 939–953.
  5. Bloem, B.R.; Okun, M.S.; Klein, C. Parkinson’s disease. Lancet 2021, 397, 2284–2303.
  6. Szász, J.A.; Orbán-Kis, K.; Constantin, V.A.; Péter, C.; Bíró, I.; Mihály, I.; Szegedi, K.; Balla, A.; Szatmári, S. Therapeutic strategies in the early stages of Parkinson’s disease: A cross-sectional evaluation of 15 years’ experience with a large cohort of Romanian patients. Neuropsychiatr. Dis. Treat. 2019, 15, 831–838.
  7. Dangouloff, T.; Servais, L. Clinical evidence supporting early treatment of patients with spinal muscular atrophy: Current perspectives. Ther. Clin. Risk Manag. 2019, 15, 1153–1161.
  8. Berardelli, A.; Wenning, G.K.; Antonini, A.; Berg, D.; Bloem, B.R.; Bonifati, V.; Brooks, D.; Burn, D.J.; Colosimo, C.; Fanciulli, A.; et al. EFNS/MDS-ES recommendations for the diagnosis of Parkinson’s disease. Eur. J. Neurol. 2013, 20, 16–34.
  9. Rizzo, G.; Copetti, M.; Arcuti, S.; Martino, D.; Fontana, A.; Logroscino, G. Accuracy of clinical diagnosis of Parkinson disease. Neurology 2016, 86, 566–576.
  10. Burton, A. How do we fix the shortage of neurologists? Lancet Neurol. 2018, 17, 502–503.
  11. Segato, A.; Marzullo, A.; Calimeri, F.; De Momi, E. Artificial intelligence for brain diseases: A systematic review. APL Bioeng. 2020, 4, 041503.
  12. Raghavendra, U.; Acharya, U.R.; Adeli, H. Artificial Intelligence techniques for automated diagnosis of neurological disorders. Eur. Neurol. 2019, 82, 41–64.
  13. Yuvaraj, R.; Murugappan, M.; Acharya, U.R.; Adeli, H.; Ibrahim, N.M.; Mesquita, E. Brain functional connectivity patterns for emotional state classification in Parkinson’s disease patients without dementia. Behav. Brain Res. 2016, 298, 248–260.
  14. Tuncer, T.; Dogan, S.; Acharya, U.R. Automated detection of Parkinson’s disease using minimum average maximum tree and singular value decomposition method with vowels. Biocybern. Biomed. Eng. 2020, 40, 211–220.
  15. Faust, O.; Razaghi, H.; Barika, R.; Ciaccio, E.J.; Acharya, U.R. A review of automated sleep stage scoring based on physiological signals for the new millennia. Comput. Methods Programs Biomed. 2019, 176, 81–91.
  16. Loh, H.W.; Ooi, C.P.; Vicnesh, J.; Oh, S.L.; Faust, O.; Gertych, A.; Acharya, U.R. Automated detection of sleep stages using deep learning techniques: A systematic review of the last decade (2010–2020). Appl. Sci. 2020, 10, 8963.
  17. Khare, S.K.; Bajaj, V.; Acharya, U.R. Detection of Parkinson’s disease using automated tunable Q wavelet transform technique with EEG signals. Biocybern. Biomed. Eng. 2021, 41, 679–689.
  18. Bhurane, A.A.; Dhok, S.; Sharma, M.; Yuvaraj, R.; Murugappan, M.; Acharya, U.R. Diagnosis of Parkinson’s disease from electroencephalography signals using linear and self-similarity features. Expert Syst. 2019, e12472.
  19. Yuvaraj, R.; Rajendra Acharya, U.R.; Hagiwara, Y. A novel Parkinson’s disease diagnosis index using higher-order spectra features in EEG signals. Neural Comput. Appl. 2018, 30, 1225–1235.
  20. Mirza, B.; Wang, W.; Wang, J.; Choi, H.; Chung, N.C.; Ping, P. Machine learning and integrative analysis of biomedical big data. Genes 2019, 10, 87.
  21. Taylor, J.; Fenner, J. The challenge of clinical adoption—The insurmountable obstacle that will stop machine learning? BJR Open 2019, 1, 20180017.
  22. Varghese, J. Artificial intelligence in medicine: Chances and challenges for wide clinical adoption. Visc. Med. 2020, 36, 443–449.
  23. Lee, J.-G.; Jun, S.; Cho, Y.-W.; Lee, H.; Kim, G.B.; Seo, J.B.; Kim, N. Deep learning in medical imaging: General overview. Korean J. Radiol. 2017, 18, 570.
  24. Balderas Silva, D.; Ponce Cruz, P.; Molina Gutierrez, A. Are the long–short term memory and convolution neural networks really based on biological systems? ICT Express 2018, 4, 100–106.
  25. Loh, H.; Ooi, C.; Palmer, E.; Barua, P.; Dogan, S.; Tuncer, T.; Baygin, M.; Acharya, U. GaborPDNet: Gabor transformation and deep neural network for Parkinson’s disease detection using EEG signals. Electronics 2021, 10, 1740.
  26. Sarvamangala, D.R.; Kulkarni, R.V. Convolutional neural networks in medical image understanding: A survey. Evol. Intell. 2021, 1–22.
  27. Fan, J.; Xu, W.; Wu, Y.; Gong, Y. Human tracking using convolutional neural networks. IEEE Trans. Neural Netw. 2010, 21, 1610–1623.
  28. Lu, J.; Liong, V.E.; Wang, G.; Moulin, P. Joint feature learning for face recognition. IEEE Trans. Inf. Forensics Secur. 2015, 10, 1371–1383.
  29. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
  30. Jiang, C.; Chen, Y.; Chen, S.; Bo, Y.; Li, W.; Tian, W.; Jun, G. A mixed deep recurrent neural network for MEMS gyroscope noise suppressing. Electronics 2019, 8, 181.
  31. Gers, F.A.; Schmidhuber, J.; Cummins, F. Learning to forget: Continual prediction with LSTM. Neural Comput. 2000, 12, 2451–2471.
  32. Coto-Jiménez, M. Improving post-filtering of artificial speech using pre-trained LSTM neural networks. Biomimetics 2019, 4, 39.
  33. Graves, A.; Liwicki, M.; Fernandez, S.; Bertolami, R.; Bunke, H.; Schmidhuber, J. A Novel connectionist system for unconstrained handwriting recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 855–868. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Nabipour, M.; Nayyeri, P.; Jabani, H.; Mosavi, A.; Salwana, E.; Shahab, S. Deep learning for stock market prediction. Entropy 2020, 22, 840. [Google Scholar] [CrossRef] [PubMed]
  35. Qiu, J.; Wang, B.; Zhou, C. Forecasting stock prices with long-short term memory neural network based on attention mechanism. PLoS ONE 2020, 15, e0227222. [Google Scholar] [CrossRef]
  36. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009, 6, e1000097. [Google Scholar] [CrossRef] [Green Version]
  37. Choi, H.; Ha, S.; Im, H.J.; Paek, S.H.; Lee, D.S. Refining diagnosis of Parkinson’s disease with deep learning-based interpretation of dopamine transporter imaging. NeuroImage Clin. 2017, 16, 586–594. [Google Scholar] [CrossRef]
  38. Garibotto, V.; Montandon, M.L.; Viaud, C.T.; Allaoua, M.; Assal, F.; Burkhard, P.R.; Ratib, O.; Zaidi, H. Regions of interest–based discriminant analysis of DaTSCAN SPECT and FDG-PET for the classification of dementia. Clin. Nucl. Med. 2013, 38, e112–e117. [Google Scholar] [CrossRef]
  39. Meyer, P.T.; Frings, L.; Rücker, G.; Hellwig, S. 18 F-FDG PET in Parkinsonism: Differential diagnosis and evaluation of cognitive impairment. J. Nucl. Med. 2017, 58, 1888–1898. [Google Scholar] [CrossRef] [Green Version]
  40. Ozsahin, I.; Sekeroglu, B.; Pwavodi, P.C.; Mok, G.S.P. High-accuracy automated diagnosis of Parkinson’s disease. Curr. Med. Imaging 2020, 16, 688–694. [Google Scholar] [CrossRef]
  41. Piccardo, A.; Cappuccio, R.; Bottoni, G.; Cecchin, D.; Mazzella, L.; Cirone, A.; Righi, S.; Ugolini, M.; Bianchi, P.; Bertolaccini, P.; et al. The role of the deep convolutional neural network as an aid to interpreting brain [18F]DOPA PET/CT in the diagnosis of Parkinson’s disease. Eur. Radiol. 2021, 31, 7003–7011. [Google Scholar] [CrossRef]
  42. Chakraborty, S.; Aich, S.; Kim, H.-C. Detection of Parkinson’s disease from 3T T1 weighted MRI scans using 3D convolutional neural network. Diagnostics 2020, 10, 402. [Google Scholar] [CrossRef]
  43. Shen, L.; Shi, J.; Gong, B.; Zhang, Y.; Dong, Y.; Zhang, Q.; An, H. Multiple empirical kernel mapping based broad learning system for classification of Parkinson’s disease with transcranial sonography. In Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 3132–3135. [Google Scholar] [CrossRef]
  44. Mehnert, S.; Reuter, I.; Schepp, K.; Maaser, P.; Stolz, E.; Kaps, M. Transcranial sonography for diagnosis of Parkinson’s disease. BMC Neurol. 2010, 10, 9. [Google Scholar] [CrossRef] [Green Version]
  45. Barua, P.D.; Dogan, S.; Tuncer, T.; Baygin, M.; Acharya, U.R. Novel automated PD detection system using aspirin pattern with EEG signals. Comput. Biol. Med. 2021, 137, 104841. [Google Scholar] [CrossRef]
  46. Soikkeli, R.; Partanen, J.; Soininen, H.; Pääkkönen, A.; Riekkinen, P. Slowing of EEG in Parkinson’s disease. Electroencephalogr. Clin. Neurophysiol. 1991, 79, 159–165. [Google Scholar] [CrossRef]
  47. Khare, S.K.; Bajaj, V.; Acharya, U.R. PDCNNet: An automatic framework for the detection of Parkinson’s disease using EEG signals. IEEE Sens. J. 2021, 21, 15. [Google Scholar] [CrossRef]
  48. Oh, S.L.; Hagiwara, Y.; Raghavendra, U.; Yuvaraj, R.; Arunkumar, N.; Murugappan, M.; Acharya, U.R. A deep learning approach for Parkinson’s disease diagnosis from EEG signals. Neural Comput. Appl. 2020, 32, 10927–10933. [Google Scholar] [CrossRef]
  49. Xu, S.; Wang, Z.; Sun, J.; Zhang, Z.; Wu, Z.; Yang, T.; Xue, G.; Cheng, C. Using a deep recurrent neural network with EEG signal to detect Parkinson’s disease. Ann. Transl. Med. 2020, 8, 874. [Google Scholar] [CrossRef]
  50. Shah, S.A.A.; Zhang, L.; Bais, A. Dynamical system based compact deep hybrid network for classification of Parkinson disease related EEG signals. Neural Netw. 2020, 130, 75–84. [Google Scholar] [CrossRef]
  51. Lee, S.; Hussein, R.; Ward, R.; Jane Wang, Z.; McKeown, M.J. A convolutional-recurrent neural network approach to resting-state EEG classification in Parkinson’s disease. J. Neurosci. Methods 2021, 361, 109282. [Google Scholar] [CrossRef]
  52. Di Biase, L.; Di Santo, A.; Caminiti, M.L.; De Liso, A.; Shah, S.A.; Ricci, L.; Di Lazzaro, V. Gait analysis in Parkinson’s disease: An overview of the most accurate markers for diagnosis and symptoms monitoring. Sensors 2020, 20, 3529. [Google Scholar] [CrossRef]
  53. Xia, Y.; Yao, Z.; Ye, Q.; Cheng, N. A dual-modal attention-enhanced deep learning network for quantification of Parkinson’s disease characteristics. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 42–51. [Google Scholar] [CrossRef]
  54. Zhao, A.; Qi, L.; Li, J.; Dong, J.; Yu, H. A hybrid spatio-temporal model for detection and severity rating of Parkinson’s disease from gait data. Neurocomputing 2018, 315, 1–8. [Google Scholar] [CrossRef] [Green Version]
  55. Yogev, G.; Giladi, N.; Peretz, C.; Springer, S.; Simon, E.S.; Hausdorff, J.M. Dual tasking, gait rhythmicity, and Parkinson’s disease: Which aspects of gait are attention demanding? Eur. J. Neurosci. 2005, 22, 1248–1256. [Google Scholar] [CrossRef]
  56. Hausdorff, J.M.; Lowenthal, J.; Herman, T.; Gruendlinger, L.; Peretz, C.; Giladi, N. Rhythmic auditory stimulation modulates gait variability in Parkinson’s disease. Eur. J. Neurosci. 2007, 26, 2369–2375. [Google Scholar] [CrossRef]
  57. Frenkel-Toledo, S.; Giladi, N.; Peretz, C.; Herman, T.; Gruendlinger, L.; Hausdorff, J.M. Treadmill walking as an external pacemaker to improve gait rhythm and stability in Parkinson’s disease. Mov. Disord. 2005, 20, 1109–1114. [Google Scholar] [CrossRef]
  58. El Maachi, I.; Bilodeau, G.-A.; Bouachir, W. Deep 1D-Convnet for accurate Parkinson disease detection and severity prediction from gait. Expert Syst. Appl. 2020, 143, 113075. [Google Scholar] [CrossRef]
  59. Balaji, E.; Brindha, D.; Elumalai, V.K.; Vikrama, R. Automatic and non-invasive Parkinson’s disease diagnosis and severity rating using LSTM network. Appl. Soft Comput. 2021, 108, 107463. [Google Scholar] [CrossRef]
  60. Thomas, M.; Lenka, A.; Kumar Pal, P. Handwriting analysis in Parkinson’s disease: Current status and future directions. Mov. Disord. Clin. Pract. 2017, 4, 806–818. [Google Scholar] [CrossRef]
  61. McLennan, J.E.; Nakano, K.; Tyler, H.R.; Schwab, R.S. Micrographia in Parkinson’s disease. J. Neurol. Sci. 1972, 15, 141–152. [Google Scholar] [CrossRef]
  62. Drotár, P.; Mekyska, J.; Rektorová, I.; Masarová, L.; Smékal, Z.; Faundez-Zanuy, M. Evaluation of handwriting kinematics and pressure for differential diagnosis of Parkinson’s disease. Artif. Intell. Med. 2016, 67, 39–46. [Google Scholar] [CrossRef] [PubMed]
  63. Pereira, C.R.; Pereira, D.R.; Silva, F.A.; Masieiro, J.P.; Weber, S.A.T.; Hook, C.; Papa, J.P. A new computer vision-based approach to aid the diagnosis of Parkinson’s disease. Comput. Methods Programs Biomed. 2016, 136, 79–88. [Google Scholar] [CrossRef] [PubMed]
  64. Pereira, C.R.; Weber, S.A.T.; Hook, C.; Rosa, G.H.; Papa, J.P. Deep learning-aided Parkinson’s disease diagnosis from handwritten dynamics. In Proceedings of the 29th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Sao Paulo, Brazil, 4–7 October 2016; pp. 340–346. [Google Scholar] [CrossRef]
  65. Kamran, I.; Naz, S.; Razzak, I.; Imran, M. Handwriting dynamics assessment using deep neural network for early identification of Parkinson’s disease. Future Gener. Comput. Syst. 2021, 117, 234–244. [Google Scholar] [CrossRef]
  66. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  67. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 28th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  68. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  69. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef] [Green Version]
  70. Jones, C.A.; Hoffman, M.R.; Lin, L.; Abdelhalim, S.; Jiang, J.J.; McCulloch, T.M. Identification of swallowing disorders in early and mid-stage Parkinson’s disease using pattern recognition of pharyngeal high-resolution manometry data. Neurogastroenterol. Motil. 2018, 30, e13236. [Google Scholar] [CrossRef]
  71. Prince, J.; de Vos, M. A deep learning framework for the remote detection of Parkinson’S Disease using smart-phone sensor data. In Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 17–21 July 2018; pp. 3144–3147. [Google Scholar] [CrossRef]
  72. Tjaden, K. Speech and swallowing in Parkinson’s disease. Top. Geriatr. Rehabil. 2008, 24, 115–126. [Google Scholar] [CrossRef]
  73. Nagasubramanian, G.; Sankayya, M. Multi-variate vocal data analysis for detection of Parkinson disease using deep learning. Neural Comput. Appl. 2021, 33, 4849–4864. [Google Scholar] [CrossRef]
  74. Goyal, J.; Khandnor, P.; Aseri, T.C. A hybrid approach for Parkinson’s disease diagnosis with resonance and time-frequency based features from speech signals. Expert Syst. Appl. 2021, 182, 115283. [Google Scholar] [CrossRef]
  75. Ali, L.; Zhu, C.; Zhang, Z.; Liu, Y. Automated detection of Parkinson’s disease based on multiple types of sustained phonations using linear discriminant analysis and genetically optimized neural network. IEEE J. Transl. Eng. Health Med. 2019, 1–10. [Google Scholar] [CrossRef]
  76. Putri, F.; Caesarendra, W.; Pamanasari, E.D.; Ariyanto, M.; Setiawan, J.D. Parkinson disease detection based on voice and EMG pattern classification method for Indonesian case study. J. Energy Mech. Mater. Manuf. Eng. 2018, 3, 87. [Google Scholar] [CrossRef]
  77. Vasquez-Correa, J.C.; Arias-Vergara, T.; Orozco-Arroyave, J.R.; Eskofier, B.; Klucken, J.; Noth, E. Multimodal assessment of Parkinson’s disease: A deep learning approach. IEEE J. Biomed. Health Inform. 2019, 23, 1618–1630. [Google Scholar] [CrossRef]
  78. Oung, Q.W.; Muthusamy, H.; Basah, S.N.; Lee, H.; Vijean, V. Empirical Wavelet transform based features for classification of Parkinson’s disease severity. J. Med. Syst. 2018, 42, 29. [Google Scholar] [CrossRef]
  79. Ding, S.; Zhao, H.; Zhang, Y.; Xu, X.; Nie, R. Extreme learning machine: Algorithm, theory and applications. Artif. Intell. Rev. 2015, 44, 103–115. [Google Scholar] [CrossRef]
  80. Panch, T.; Mattie, H.; Celi, L.A. The ‘inconvenient truth’ about AI in healthcare. NPJ Digit. Med. 2019, 2, 77. [Google Scholar] [CrossRef]
  81. Melnychenko, O. Is artificial intelligence ready to assess an enterprise’s financial security? J. Risk Financ. Manag. 2020, 13, 191. [Google Scholar] [CrossRef]
  82. Tavares, J.; Ong, F.S.; Ye, T.; Xue, J.; He, M.; Gu, J.; Lin, H.; Xu, B.; Cheng, Y. Psychosocial factors affecting artificial intelligence adoption in health care in China: Cross-sectional study. J. Med. Internet Res. 2019, 21, e14316. [Google Scholar] [CrossRef] [Green Version]
  83. Butler, D. Translational research: Crossing the valley of death. Nature 2008, 453, 840–842. [Google Scholar] [CrossRef]
  84. Gourville, J.T. Eager sellers and stony buyers: Understanding the psychology of new-product adoption. Harv. Bus. Rev. 2006, 84, 98–106, 145. Available online: http://www.ncbi.nlm.nih.gov/pubmed/16770897 (accessed on 12 October 2021).
  85. Xiao, B.; He, N.; Wang, Q.; Cheng, Z.; Jiao, Y.; Haacke, E.M.; Yan, F.; Shi, F. Quantitative susceptibility mapping based hybrid feature extraction for diagnosis of Parkinson’s disease. NeuroImage Clin. 2019, 24, 102070. [Google Scholar] [CrossRef]
  86. Yasaka, K.; Kamagata, K.; Ogawa, T.; Hatano, T.; Takeshige-Amano, H.; Ogaki, K.; Andica, C.; Akai, H.; Kunimatsu, A.; Uchida, W.; et al. Parkinson’s disease: Deep learning with a parameter-weighted structural connectome matrix for diagnosis and neural circuit disorder investigation. Neuroradiology 2021, 63, 1451–1462. [Google Scholar] [CrossRef]
  87. Tremblay, C.; Mei, J.; Frasnelli, J. Olfactory bulb surroundings can help to distinguish Parkinson’s disease from non-parkinsonian olfactory dysfunction. NeuroImage Clin. 2020, 28, 102457. [Google Scholar] [CrossRef]
  88. Shinde, S.; Prasad, S.; Saboo, Y.; Kaushick, R.; Saini, J.; Pal, P.K.; Ingalhalikar, M. Predictive markers for Parkinson’s disease using deep neural nets on neuromelanin sensitive MRI. NeuroImage Clin. 2019, 22, 101748. [Google Scholar] [CrossRef]
  89. Shen, T.; Jiang, J.; Lin, W.; Ge, J.; Wu, P.; Zhou, Y.; Zuo, C.; Wang, J.; Yan, Z.; Shi, K. Use of overlapping group LASSO sparse deep belief network to discriminate Parkinson’s disease and normal control. Front. Neurosci. 2019, 13. [Google Scholar] [CrossRef] [Green Version]
  90. Dai, Y.; Tang, Z.; Wang, Y.; Xu, Z. Data driven intelligent diagnostics for Parkinson’s disease. IEEE Access 2019, 7, 106941–106950. [Google Scholar] [CrossRef]
  91. Hirschauer, T.J.; Adeli, H.; Buford, J.A. Computer-aided diagnosis of Parkinson’s disease using enhanced probabilistic neural network. J. Med. Syst. 2015, 39, 179. [Google Scholar] [CrossRef]
  92. Magesh, P.R.; Myloth, R.D.; Tom, R.J. An explainable machine learning model for early detection of Parkinson’s disease using LIME on DaTSCAN imagery. Comput. Biol. Med. 2020, 126, 104041. [Google Scholar] [CrossRef]
  93. Chien, C.-Y.; Hsu, S.-W.; Lee, T.-L.; Sung, P.-S.; Lin, C.-C. Using artificial neural network to discriminate Parkinson’s disease from other Parkinsonisms by focusing on putamen of dopamine transporter SPECT images. Biomedicines 2020, 9, 12. [Google Scholar] [CrossRef]
  94. Hsu, S.-Y.; Yeh, L.-R.; Chen, T.-B.; Du, W.-C.; Huang, Y.-H.; Twan, W.-H.; Lin, M.-C.; Hsu, Y.-H.; Wu, Y.-C.; Chen, H.-Y. Classification of the multiple stages of Parkinson’s Disease by a deep convolution neural network based on 99mTc-TRODAT-1 SPECT images. Molecules 2020, 25, 4792. [Google Scholar] [CrossRef]
  95. Ortiz, A.; Munilla, J.; Martínez-Ibañez, M.; Górriz, J.M.; Ramírez, J.; Salas-Gonzalez, D. Parkinson’s disease detection using isosurfaces-based features and convolutional neural networks. Front. Neuroinform. 2019, 13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  96. Martinez-Murcia, F.J.; Górriz, J.M.; Ramírez, J.; Ortiz, A. Convolutional neural networks for neuroimaging in Parkinson’s disease: Is preprocessing needed? Int. J. Neural Syst. 2018, 28, 1850035. [Google Scholar] [CrossRef] [PubMed]
  97. Nancy Jane, Y.; Khanna Nehemiah, H.; Arputharaj, K. A Q-backpropagated time delay neural network for diagnosing severity of gait disturbances in Parkinson’s disease. J. Biomed. Inform. 2016, 60, 169–176. [Google Scholar] [CrossRef] [PubMed]
  98. Som, A.; Krishnamurthi, N.; Buman, M.; Turaga, P. Unsupervised pre-trained models from healthy ADLs improve Parkinson’s disease classification of gait patterns. In Proceedings of the 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 784–788. [Google Scholar] [CrossRef]
  99. Zhang, H.; Deng, K.; Li, H.; Albin, R.L.; Guan, Y. Deep learning identifies digital biomarkers for self-reported Parkinson’s disease. Patterns 2020, 1, 100042. [Google Scholar] [CrossRef] [PubMed]
  100. Yurdakul, O.C.; Subathra, M.S.P.; George, S.T. Detection of Parkinson’s disease from gait using neighborhood representation local binary patterns. Biomed. Signal Process. Control 2020, 62, 102070. [Google Scholar] [CrossRef]
  101. Zeng, W.; Liu, F.; Wang, Q.; Wang, Y.; Ma, L.; Zhang, Y. Parkinson’s disease classification using gait analysis via deterministic learning. Neurosci. Lett. 2016, 633, 268–278. [Google Scholar] [CrossRef]
  102. Alharthi, A.S.; Casson, A.J.; Ozanyan, K.B. Gait spatiotemporal signal analysis for Parkinson’s disease detection and severity rating. IEEE Sens. J. 2021, 21, 1838–1848. [Google Scholar] [CrossRef]
  103. Butt, A.H.; Cavallo, F.; Maremmani, C.; Rovini, E. Biomechanical parameters assessment for the classification of Parkinson disease using bidirectional long short-term memory. In Proceedings of the 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 5761–5764. [Google Scholar] [CrossRef]
  104. Folador, J.P.; Santos, M.C.S.; Luiz, L.M.D.; De Souza, L.A.P.S.; Vieira, M.F.; Pereira, A.A.; Andrade, A.D.O. On the use of histograms of oriented gradients for tremor detection from sinusoidal and spiral handwritten drawings of people with Parkinson’s disease. Med. Biol. Eng. Comput. 2021, 59, 195–214. [Google Scholar] [CrossRef]
  105. Yang, T.-L.; Lin, C.-H.; Chen, W.-L.; Lin, H.-Y.; Su, C.-S.; Liang, C.-K. Hash transformation and machine learning-based decision-making classifier improved the accuracy rate of automated Parkinson’s disease screening. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 72–82. [Google Scholar] [CrossRef]
  106. Cantürk, İ. Fuzzy recurrence plot-based analysis of dynamic and static spiral tests of Parkinson’s disease patients. Neural Comput. Appl. 2021, 33, 349–360. [Google Scholar] [CrossRef]
  107. Gil-Martín, M.; Montero, J.M.; San-Segundo, R. Parkinson’s disease detection from drawing movements using convolutional neural networks. Electronics 2019, 8, 907. [Google Scholar] [CrossRef] [Green Version]
  108. Naseer, A.; Rani, M.; Naz, S.; Razzak, M.I.; Imran, M.; Xu, G. Refining Parkinson’s neurological disorder identification through deep transfer learning. Neural Comput. Appl. 2020, 32, 839–854. [Google Scholar] [CrossRef] [Green Version]
  109. Gazda, M.; Hires, M.; Drotar, P. Multiple-fine-tuned convolutional neural networks for Parkinson’s disease diagnosis from offline handwriting. IEEE Trans. Syst. Man Cybern. Syst. 2021, 1–12. [Google Scholar] [CrossRef]
  110. Pereira, C.R.; Pereira, D.R.; de Rosa, G.H.; Albuquerque, V.H.C.; Weber, S.A.; Hook, C.; Papa, J.P. Handwritten dynamics assessment through convolutional neural networks: An application to Parkinson’s disease identification. Artif. Intell. Med. 2018, 87, 67–77. [Google Scholar] [CrossRef] [Green Version]
  111. Afonso, L.C.; Rosa, G.H.; Pereira, C.R.; Weber, S.A.; Hook, C.; Albuquerque, V.H.C.; Papa, J.P. A recurrence plot-based approach for Parkinson’s disease identification. Future Gener. Comput. Syst. 2019, 94, 282–292. [Google Scholar] [CrossRef]
  112. Ribeiro, L.C.F.; Afonso, L.C.S.; Papa, J.P. Bag of samplings for computer-assisted Parkinson’s disease diagnosis based on recurrent neural networks. Comput. Biol. Med. 2019, 115, 103477. [Google Scholar] [CrossRef]
  113. Diaz, M.; Ferrer, M.A.; Impedovo, D.; Pirlo, G.; Vessio, G. Dynamically enhanced static handwriting representation for Parkinson’s disease detection. Pattern Recognit. Lett. 2019, 128, 204–210. [Google Scholar] [CrossRef]
  114. Diaz, M.; Moetesum, M.; Siddiqi, I.; Vessio, G. Sequence-based dynamic handwriting analysis for Parkinson’s disease detection with one-dimensional convolutions and BiGRUs. Expert Syst. Appl. 2021, 168, 114405. [Google Scholar] [CrossRef]
  115. Nõmm, S.; Zarembo, S.; Medijainen, K.; Taba, P.; Toomela, A. Deep CNN based classification of the Archimedes spiral drawing tests to support diagnostics of the Parkinson’s disease. IFAC-PapersOnLine 2020, 53, 260–264. [Google Scholar] [CrossRef]
  116. Peker, M.; Şen, B.; Delen, D. Computer-aided diagnosis of Parkinson’s disease using complex-valued neural networks and mRMR feature selection algorithm. J. Healthc. Eng. 2015, 6, 281–302. [Google Scholar] [CrossRef] [Green Version]
  117. Wodzinski, M.; Skalski, A.; Hemmerling, D.; Orozco-Arroyave, J.R.; Noth, E. Deep learning approach to Parkinson’s disease detection using voice recordings and convolutional neural network dedicated to image classification. In Proceedings of the 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 717–720. [Google Scholar] [CrossRef]
  118. Avci, D.; Dogantekin, A. An expert diagnosis system for Parkinson disease based on genetic algorithm-wavelet kernel-extreme learning machine. Parkinson’s Dis. 2016, 1–9. [Google Scholar] [CrossRef] [Green Version]
  119. Gómez-Vilda, P.; Mekyska, J.; Ferrández, J.M.; Palacios-Alonso, D.; Gómez-Rodellar, A.; Rodellar-Biarge, V.; Galaz, Z.; Smekal, Z.; Eliasova, I.; Kostalova, M.; et al. Parkinson disease detection from speech articulation neuromechanics. Front. Neuroinform. 2017, 11, 56. [Google Scholar] [CrossRef] [Green Version]
  120. Xu, Z.-J.; Wang, R.-F.; Wang, J.; Yu, D.-H. Parkinson’s disease detection based on spectrogram-deep convolutional generative adversarial network sample augmentation. IEEE Access 2020, 8, 206888–206900. [Google Scholar] [CrossRef]
  121. Karaman, O.; Çakın, H.; Alhudhaif, A.; Polat, K. Robust automated Parkinson disease detection based on voice signals with transfer learning. Expert Syst. Appl. 2021, 178, 115013. [Google Scholar] [CrossRef]
  122. Åström, F.; Koker, R. A parallel neural network approach to prediction of Parkinson’s disease. Expert Syst. Appl. 2011, 38, 12470–12474. [Google Scholar] [CrossRef]
  123. Narendra, N.P.; Schuller, B.; Alku, P. The detection of Parkinson’s Disease from speech using voice source information. IEEE ACM Trans. Audio Speech Lang. Process. 2021, 29, 1925–1936. [Google Scholar] [CrossRef]
Figure 1. Basic architecture of ANN and DNN models.
Figure 2. Basic architecture of the CNN model.
Figure 3. Basic architecture of the LSTM model.
Figure 4. Flow diagram of the PRISMA model in the article selection process to build the systematic review.
Figure 5. Stacked bar plot of the number of deep learning models proposed for each modality of brain analysis.
Figure 6. Box and whiskers plot of the model accuracy of deep learning studies using various modalities of brain analysis.
Figure 7. Bar plot representation of the model accuracy by various investigators for different modalities of brain analysis.
Figure 8. Box and whiskers plot of the model accuracy of deep learning studies using various modalities of motor symptoms.
Figure 9. (a) Pie chart representation of various deep learning models proposed for gait analysis and (b) bar chart representation of model accuracy for each deep learning study in gait analysis.
Figure 10. (a) Pie chart representation of various deep learning models proposed for handwriting analysis and (b) bar chart representation of model accuracy for each deep learning study in handwriting analysis.
Figure 11. (a) Pie chart representation of various deep learning models proposed for speech analysis and (b) bar chart representation of model accuracy for each deep learning study in speech analysis.
Figure 12. Bar chart representation of the number of deep learning studies published between January 2020 and July 2021 for brain analysis and motor symptoms.
Figure 13. Bar chart representation of the average model accuracy from various deep learning studies obtained for each modality.
Figure 14. Pie chart representation of various deep learning models proposed for automated PD detection studies in this review.
Figure 15. (a) Configure a deep learning model that can perform the diagnosis (i.e., identification of the ailment) and segmentation (i.e., explanation, or detailed information) simultaneously; (b) perform diagnosis in the first stage, and in the second stage, segmentation is performed only on the input image or signal that had been diagnosed as PD in the first stage.
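The two-stage design in Figure 15b can be expressed as a short inference routine: a classification network first screens the input, and a segmentation network is invoked only for inputs flagged as PD, so that the explanatory map is computed only where it is clinically relevant. The sketch below is illustrative; diagnosis_model and segmentation_model are hypothetical placeholders for trained networks, not models from any of the reviewed studies.

```python
# Hedged sketch of the two-stage pipeline in Figure 15b.
# `diagnosis_model` and `segmentation_model` are hypothetical trained Keras models:
# the first outputs a PD probability, the second an explanatory segmentation map.
import numpy as np

def two_stage_inference(scan, diagnosis_model, segmentation_model, threshold=0.5):
    """Stage 1: classify the scan; Stage 2: segment only if classified as PD."""
    batch = np.expand_dims(scan, axis=0)          # add batch dimension
    pd_probability = float(diagnosis_model.predict(batch)[0, 0])

    if pd_probability < threshold:
        return {"diagnosis": "HC", "probability": pd_probability, "mask": None}

    # Segmentation (the "explanation") is computed only for suspected PD inputs.
    mask = segmentation_model.predict(batch)[0]
    return {"diagnosis": "PD", "probability": pd_probability, "mask": mask}
```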
Figure 16. Behavioral tradeoff matrix.
Figure 17. Block diagram of a Cloud-based system for PD diagnosis using various types of inputs from different modalities.
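The cloud-based arrangement of Figure 17 can be prototyped as a small web service that accepts data from any supported modality and routes it to the corresponding trained model. The sketch below uses Flask purely for illustration; the endpoint name, modality keys and model file paths are assumptions, not components described in the reviewed studies.

```python
# Hedged sketch of a cloud-hosted PD screening endpoint (cf. Figure 17).
# Flask is used only for illustration; model paths and modality names are assumptions.
from flask import Flask, request, jsonify
import numpy as np
from tensorflow.keras.models import load_model

app = Flask(__name__)

# One trained model per supported modality (hypothetical file names).
MODELS = {
    "eeg": load_model("models/eeg_cnn.h5"),
    "gait": load_model("models/gait_lstm.h5"),
    "speech": load_model("models/speech_cnn.h5"),
}

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    modality = payload.get("modality")
    if modality not in MODELS:
        return jsonify({"error": f"unsupported modality: {modality}"}), 400

    # The client sends a pre-processed signal or feature array as a nested list.
    features = np.asarray(payload["data"], dtype="float32")[None, ...]
    probability = float(MODELS[modality].predict(features)[0, 0])
    return jsonify({"modality": modality, "pd_probability": probability})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```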
Table 1. Summary of the Boolean search string across the respective journal article databases.

Database | [Title] | AND [Title/Abstract] | No. of Studies
PubMed | “parkinson” AND “disease” | “Neural network”; “Deep learning” | 178
Google Scholar | “parkinson” AND “disease” | “Prediction” OR “Diagnosis” OR “Detection” | 248
IEEE | “parkinson” AND “disease” | “Neural network”; “Deep learning” | 135
ScienceDirect | “parkinson” AND “disease” | “Neural network”; “Deep learning” | 233
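For reproducibility, the PubMed row of Table 1 can be issued programmatically through the NCBI E-utilities esearch endpoint. The sketch below is illustrative: the literal term string is a reconstruction of the Boolean fields in Table 1 and the date window mirrors the review period (January 2011 to July 2021); it is not necessarily the exact query executed by the authors.

```python
# Hedged sketch: querying PubMed with a Boolean string reconstructed from Table 1
# via the NCBI E-utilities esearch endpoint.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

term = (
    'parkinson[Title] AND disease[Title] AND '
    '("neural network"[Title/Abstract] OR "deep learning"[Title/Abstract])'
)

params = {
    "db": "pubmed",
    "term": term,
    "datetype": "pdat",
    "mindate": "2011/01/01",
    "maxdate": "2021/07/31",
    "retmode": "json",
    "retmax": 0,          # only the hit count is needed here
}

resp = requests.get(ESEARCH_URL, params=params, timeout=30)
count = resp.json()["esearchresult"]["count"]
print(f"PubMed records matching the search string: {count}")
```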
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
