Enhancing biofeedback-driven self-guided virtual reality exposure therapy through arousal detection from multimodal data using machine learning
Brain Informatics volume 10, Article number: 14 (2023)
Abstract
Virtual reality exposure therapy (VRET) is a novel intervention technique that allows individuals to experience anxiety-evoking stimuli in a safe environment, recognise specific triggers and gradually increase their exposure to perceived threats. Public-speaking anxiety (PSA) is a prevalent form of social anxiety, characterised by stressful arousal and anxiety generated when presenting to an audience. In self-guided VRET, participants can gradually increase their tolerance to exposure and reduce anxiety-induced arousal and PSA over time. However, creating such a VR environment and determining physiological indices of anxiety-induced arousal or distress is an open challenge. Environment modelling, character creation and animation, psychological state determination and the use of machine learning (ML) models for anxiety or stress detection are equally important, and multi-disciplinary expertise is required. In this work, we have explored a series of ML models with publicly available data sets (using electroencephalogram and heart rate variability) to predict arousal states. If we can detect anxiety-induced arousal, we can trigger calming activities to allow individuals to cope with and overcome distress. Here, we discuss the means of effectively selecting ML models and parameters for arousal detection. We propose a pipeline to overcome the model selection problem with different parameter settings in the context of virtual reality exposure therapy. This pipeline can be extended to other domains of interest where arousal detection is crucial. Finally, we have implemented a biofeedback framework for VRET in which we successfully provided feedback in the form of heart rate and a brain laterality index derived from our acquired multimodal data for psychological intervention to overcome anxiety.
1 Introduction
Anxiety is an emotional state characterised by negative affect and worry, heightened arousal, careful environmental monitoring, rumination and avoidance behaviour, ranging from mild to severe. Intense states of anxiety, or even fear—a more rudimentary physiological response to a perceived threat that can lead to fight/flight/freeze reactions and panic behaviour—can be symptoms of different psychological disorders. For example, phobias are defined by an exaggerated fear or unrealistic sense of threat to a situation or object, and appear in many forms. In the Diagnostic and Statistical Manual of Mental Disorders (DSM-5, 2013) [1, 2], the American Psychiatric Association defines five types of phobia, related to natural environments (e.g., heights), animals (e.g., spiders), specific situations (e.g., public spaces), blood/injury or medical issues, and other types (e.g., loud noise, vomiting, choking). These debilitating disorders affect about 13% of the world's total population. Research is ongoing into the factors contributing to the onset, development and maintenance of phobias and anxiety-related disorders, their underlying cognitive and behavioural processes, physical manifestations, and treatment methods [3]. Traditional treatments of such disorders include in vivo exposure, interoceptive exposure, cognitive behavioural therapy (CBT), applied muscle tension, supportive psychotherapy, hypnotherapy, and medications such as beta-blockers or sedatives [4].
Virtual reality exposure therapy (VRET) is one of the most promising novel treatments, enabled by its superior immersive capabilities that generate a greater sense of presence and enhance user effects, especially for negatively valenced, high-arousal stimuli [5]. Over the last two decades, VRET, encompassing psychological treatment principles and enabled by advances in display and computing technology, has become a popular digital intervention for various psychological disorders [6, 7], being as effective as in vivo (i.e., face-to-face) exposure therapy post-intervention [8]. For example, a meta-analysis showed VRET for Social Anxiety Disorder (encompassing an exaggerated fear of being rejected, negatively evaluated or humiliated during social interactions, observations and/or performance situations) to be more effective than wait-list controls (with large effect sizes), and even than therapist-led in vivo exposure therapy (though with only a small effect size) [6]. VRET also shows good acceptability among users due to its safe, controlled and empowering means of exposure. However, the state of the art lacks one key element: there has been no attempt at real-time biofeedback for VRET intervention. A vital part of our development of VRET is the integration of bio-signals, such as heart rate variability or cortical arousal, to assess and ameliorate physiological distress states (e.g., fear or anxiety-induced arousal) during exposure. Here, the correct detection of physiological states through robust models for the effective management of anxiety-induced arousal or stress is pivotal to facilitating intervention and enhancing psychological health and well-being. However, a reliable and automated system is needed to accomplish this task. Given that artificial intelligence (AI) and machine learning (ML) have been playing significant roles in methodological developments across diverse problem domains, including computational biology [9, 10], cyber security [11,12,13,14], disease detection [15,16,17,18,19,20,21] and management [22,23,24,25,26,27], elderly care [28, 29], epidemiological studies [30], fighting pandemics [31,32,33,34,35,36,37], healthcare [38,39,40,41,42], healthcare service delivery [43,44,45], natural language processing [46,47,48,49,50], social inclusion [51,52,53] and many more, AI- and ML-based methods can be employed for this task. Hence, we have explored a series of ML models with publicly available data sets (using electroencephalogram and heart rate variability) to predict arousal states. If we can detect anxiety-induced arousal, we can trigger calming activities to allow individuals to cope with and overcome distress. Here, we discuss the means of effective selection of ML models and parameters in arousal detection. We presented our initial concept of ML-driven self-guided virtual reality exposure therapy based on arousal state detection from multimodal data in [54]. We have since begun implementation, and in this paper we add biofeedback in the form of heart rate variation and a laterality index, derived from EEG data and heart rate synthesised from signals collected with the Emotiv EPOC Flex [55].
2 Related work
Arousal detection for noninvasive intervention requires a multi-disciplinary approach in which psychological state determination, ML models for arousal or stress detection, and exploration of the related domains for model implementation are equally important. In this section, we narrow the scope and present an overview of the state of the art.
2.1 Emotion/stress detection
Koelstra et al. [56] presented a multimodal data set for the analysis of human affective states. They collected physiological signals, including electroencephalographic (EEG) data, from participants watching music videos, and each video was rated in terms of arousal, valence, like/dislike, dominance and familiarity. The data set has been widely used for developing various ML models for arousal, anxiety and stress detection. Ahuja and Banga [57] created another data set in which they classified mental stress in 206 students. They used linear regression (LR), support vector machine (SVM), Naïve Bayes (NB) and random forest (RF) classification algorithms [9, 30, 38, 41, 49, 51, 58,59,60] to determine mental stress. Using SVM with tenfold cross-validation, they reported an accuracy of 85.71%. Ghaderi et al. [61] used respiration, galvanic skin response (GSR) from the hand and foot, heart rate (HR) and electromyography (EMG) at different time intervals to examine different stress levels, and applied k-nearest neighbour (k-NN) and SVM models for stress detection [61].
2.2 Emotion/stress detection using EEG
EEG is a noninvasive way to measure electrical responses generated by the outer layers of the cortex, primarily pyramidal cells. It has been used to investigate neural activity during arousal, stress, depression, anxiety and various other emotional states. Several studies have applied ML methods to classify and/or predict emotional brain states based on EEG activity [72, 73]. For example, Chen et al. [74] designed a neural feedback system to predict and classify anxiety states using resting-state EEG signals from 34 subjects. Anxiety levels were estimated using power spectral density (PSD) features, and SVM was then used to classify anxious and non-anxious states. Shon et al. [67] integrated genetic algorithm (GA)-based feature selection into the ML pipeline along with a k-NN classifier to detect stress in EEG signals; the model was evaluated on the DEAP data set [56] for the identification of emotional stress states. Other work also used the publicly available DEAP data set for emotion recognition in virtual environments [68]. Based on Russell's circumplex model, statistical features, higher order crossing (HOC) features and power bands were extracted from the EEG signals, and affective state classification was performed using SVM and RF. In major depressive disorder (MDD, n = 32), Duan et al. [69] extracted interhemispheric asymmetry and cross-correlation features from EEG signals and combined these in a classification using k-NN, SVM and convolutional neural networks (CNN). Similarly, Alshorman et al. [70] used frontal lobe EEG data to identify stressed patients. Fast Fourier transformation (FFT) was applied to extract features from the signal, which were then passed to ML models, such as SVM and NB, for subject-wise classification of control and stress groups. Table 1 shows a summary of ML models used for arousal detection and their performance.
2.3 Machine learning and VRET
Balan et al. [3] used the publicly available DEAP database [56] and applied various ML algorithms to classify the six basic emotions (joy, anger, sadness, disgust, surprise and fear) based on the physiological data. They presented the stages of model development and its evaluation in a virtual environment with gradual stimulus exposure for acrophobia treatment, accompanied by physiological signal monitoring. In [62], the authors used a hybrid ML technique combining the k-means++ clustering algorithm and principal component analysis (PCA) to cluster drug addicts and reveal the relationship between cardiac physiological characteristics and treatment effect in virtual reality therapy. Other research [64] used a single-session VRET for patients with spider phobia, including clinical, neuroimaging (functional magnetic resonance imaging, fMRI) and genetic data for baseline and post-treatment (after 6 months) analysis. The authors reported a 30% reduction in spider phobia, assessed psychometrically, and a 50% reduction in individual distance avoidance tests, assessed behaviourally. From this literature, we systematically picked the widely used ML algorithms to develop our ML pipeline. In Fig. 1, we show the performance (accuracy, precision, recall and F1-score) on the publicly available data sets that we used to train our model. Based on our review of the existing literature, we considered Gaussian Naïve Bayes (GNB), quadratic discriminant analysis (QDA), support vector machine (SVM), multilayer perceptron (MLP), AdaBoost (ADB), k-nearest neighbour (KNN), decision tree (DT) and random forest (RF) ML models with multiple parameter settings.
3 ML model pipeline and data set
First, we collected EEG and multimodal physiological data from suitable sensors. Then we cleaned the data for further processing. We used individual phases of feature selection, feature preprocessing and feature construction, followed by model selection and parameter optimisation. This process was repeated using automated ML to obtain the best possible outcome from the collected data set. After model validation, we applied the trained model to VRET and/or other domains where arousal detection is crucial. Figure 2 shows the proposed ML pipeline.
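For concreteness, the sketch below illustrates how such an automated model-selection stage could be assembled with scikit-learn. The candidate models mirror those listed in Sect. 2.3, but the preprocessing steps, parameter grids, feature count and scoring choice are illustrative assumptions rather than the exact configuration used in our experiments.

```python
# Minimal sketch of the model-selection stage of the pipeline (assumed settings).
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Candidate models and hypothetical parameter settings to search over.
candidates = {
    "GNB": (GaussianNB(), {}),
    "QDA": (QuadraticDiscriminantAnalysis(), {}),
    "SVM": (SVC(), {"clf__C": [0.1, 1, 10], "clf__kernel": ["rbf", "linear"]}),
    "MLP": (MLPClassifier(max_iter=500), {"clf__hidden_layer_sizes": [(50,), (100,)]}),
    "ADB": (AdaBoostClassifier(), {"clf__n_estimators": [50, 100]}),
    "KNN": (KNeighborsClassifier(), {"clf__n_neighbors": [3, 5, 11]}),
    "DT":  (DecisionTreeClassifier(), {"clf__max_depth": [5, 10, None]}),
    "RF":  (RandomForestClassifier(), {"clf__n_estimators": [100, 300]}),
}

def select_best_model(X, y, cv=10):
    """Grid-search every candidate and return the best (name, fitted search)."""
    best_name, best_search = None, None
    for name, (clf, grid) in candidates.items():
        pipe = Pipeline([
            ("scale", StandardScaler()),                # feature preprocessing
            ("select", SelectKBest(f_classif, k=20)),   # feature selection (k is illustrative)
            ("clf", clf),                               # candidate classifier
        ])
        search = GridSearchCV(pipe, grid, cv=cv, scoring="f1_macro")
        search.fit(X, y)
        if best_search is None or search.best_score_ > best_search.best_score_:
            best_name, best_search = name, search
    return best_name, best_search
```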
3.1 Feature extraction for real-time data analysis
Different features for real-time data analysis have been extracted following [71, 75,76,77]. In ML, the selection of useful features from data to identify stress levels is crucial: a better selection of features can improve the efficacy of the classification algorithm at a reduced computational cost. For EEG signals, a large number of features can be considered in both the time and frequency domains. However, evaluating the possible feature subsets and comparing their performance adds a computational burden.
When EEG is recorded at 128 Hz, calculating a feature over a single EEG reading is not informative, as individual samples carry little information on their own. This issue can be overcome by introducing the concept of a window, a continuous block of readings over which features are computed. Several studies report that a window length between 3 and 12 s is adequate when classifying mental states from EEG signals [71]. A sliding window approach is an alternative, although research shows it comes with an added computational cost. In our experiment, we used a fixed window of 5 s at a sampling frequency of 128 Hz. Figure 3 shows data acquisition using the Emotiv EPOC Flex: the panel on the left shows the top view, the panel in the middle shows a side view of the Emotiv EPOC Flex, and the panel on the right shows the data acquisition phase using the Emotiv EPOC Flex and an Oculus Quest 2 head-mounted display.
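The windowing step can be illustrated with a short sketch that slices a continuous multichannel recording into non-overlapping 5-s windows at 128 Hz; the array shapes and the synthetic example data are assumptions for illustration.

```python
import numpy as np

def segment_into_windows(signal, fs=128, window_s=5):
    """Split a (channels, samples) array into non-overlapping windows.

    Returns an array of shape (n_windows, channels, window_s * fs).
    """
    win_len = int(window_s * fs)
    n_windows = signal.shape[1] // win_len              # drop the trailing partial window
    trimmed = signal[:, : n_windows * win_len]
    return trimmed.reshape(signal.shape[0], n_windows, win_len).swapaxes(0, 1)

# Example: 60 s of synthetic 32-channel EEG sampled at 128 Hz -> 12 windows
eeg = np.random.randn(32, 60 * 128)
windows = segment_into_windows(eeg)
print(windows.shape)  # (12, 32, 640)
```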
The mean of the raw signal [75]:
\(\mu _X = \frac{1}{N}\sum _{n=1}^{N} X\left( n \right),\)
where \(X\left( n \right)\) represents the value of the \(n{\mathrm{th}}\) sample of the raw EEG signal, \(n = 1,\ldots ,N\). The standard deviation of the raw signal:
\(\sigma _X = \sqrt{\frac{1}{N}\sum _{n=1}^{N} \left( X\left( n \right) - \mu _X \right) ^2}.\)
The mean of the absolute values of the first differences of the raw signal:
\(\delta _X = \frac{1}{N-1}\sum _{n=1}^{N-1} \left| X\left( n+1 \right) - X\left( n \right) \right| .\)
The mean of the absolute values of the second differences of the raw signal:
\(\gamma _X = \frac{1}{N-2}\sum _{n=1}^{N-2} \left| X\left( n+2 \right) - X\left( n \right) \right| .\)
The means of the absolute values of the first differences of the normalised signals:
\(\tilde{\delta }_X = \frac{1}{N-1}\sum _{n=1}^{N-1} \left| \tilde{X}\left( n+1 \right) - \tilde{X}\left( n \right) \right| = \frac{\delta _X}{\sigma _X},\)
where \(\tilde{X}\left( n \right) = \frac{X\left( n \right) -\mu _X}{\sigma _X}\), and \({\mu _X}\) and \({\sigma _X}\) are the mean and standard deviation of X.
The means of the absolute values of the second differences of the normalised signals:
\(\tilde{\gamma }_X = \frac{1}{N-2}\sum _{n=1}^{N-2} \left| \tilde{X}\left( n+2 \right) - \tilde{X}\left( n \right) \right| = \frac{\gamma _X}{\sigma _X}.\)
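As an illustration, the following sketch computes these six statistical features for a single-channel window; the function name and the return layout are our own choices for illustration rather than the implementation used in the experiments.

```python
import numpy as np

def statistical_features(x):
    """Six statistical features of a single-channel EEG window x (1-D array)."""
    mu = x.mean()                                          # mean of the raw signal
    sigma = x.std()                                        # standard deviation
    delta = np.abs(np.diff(x)).mean()                      # mean |X(n+1) - X(n)|
    gamma = np.abs(x[2:] - x[:-2]).mean()                  # mean |X(n+2) - X(n)|
    x_norm = (x - mu) / sigma                              # normalised signal
    delta_norm = np.abs(np.diff(x_norm)).mean()            # equals delta / sigma
    gamma_norm = np.abs(x_norm[2:] - x_norm[:-2]).mean()   # equals gamma / sigma
    return np.array([mu, sigma, delta, gamma, delta_norm, gamma_norm])

# Example: features of one 5-s window (640 samples) of synthetic data
window = np.random.randn(640)
print(statistical_features(window))
```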
The following time- and frequency-domain features were also extracted from the EEG signals:
The maximum amplitude of channel j up to sample i (cumulative maximum):
Minimum amplitude of channel j up to sample i (cumulative minimum):
The average absolute value of amplitude among different channels (mean value):
Median of the signal among different EEG channels (median value):
Minimum amplitude among different channels (smallest window elements):
Median of the signal of channel j in a window with size k samples (moving median with window size k):
Difference between maximum and minimum of the EEG signals amplitude among different EEG channels (maximum-to-minimum difference):
Norm 2 of the EEG signals divided by the square root of the number of samples among different EEG channels (root-mean-square level):
Maximum of the EEG signal amplitude divided by the \(\text{RMS}_j\) (peak-magnitude-to-RMS ratio):
Norm of the EEG signals among different channels in each window (root-sum-of-squares level):
Deviation of EEG signals among different channels in each window (standard deviation):
The variance of the signal EEG amplitude among different channels (variance):
The maximum value of EEG amplitude among different channels in the time domain (peak):
Location of maximum EEG amplitude among channels (peak location):
The time between EEG signal peaks between the various windows (peak to peak):
Shows the sharpness of EEG signals peak (kurtosis):
Power of the EEG signal in channel j in the frequency domain in the interval [8 Hz, 15 Hz] (Alpha mean power):
Power of the signal in Beta interval (Beta mean power):
Power of the signal in Delta interval (Delta mean power):
Power of the signal in Theta interval (Theta mean power):
Level of happiness (valence) [71]:
Level of excitement (arousal) [71]:
Half of the signal power of channel j is distributed in the frequencies less than \(\text{MEDF}_j\) (median frequency):
A window is labelled calm when arousal is below 4 and valence lies between 4 and 6 [77]:
\(\text{calm}: \; \text{arousal}< 4 \;\text{and}\; 4 \le \text{valence} \le 6.\)
Here, arousal ranges from calm to excited, while valence ranges from unpleasant to pleasant. A window is labelled stressed when arousal exceeds 5 and valence is below 3 [77]:
\(\text{stress}: \; \text{arousal} > 5 \;\text{and}\; \text{valence} < 3.\)
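A minimal helper implementing these two labelling rules might look as follows; the treatment of samples matching neither rule (returned here as 'other') is an assumption.

```python
def label_affective_state(arousal, valence):
    """Label a sample as 'calm', 'stress' or 'other' using the rules of [77]."""
    if arousal < 4 and 4 <= valence <= 6:
        return "calm"
    if arousal > 5 and valence < 3:
        return "stress"
    return "other"   # samples matching neither rule are left unlabelled here

print(label_affective_state(3.2, 5.0))   # calm
print(label_affective_state(6.1, 2.4))   # stress
```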
The frequency ranges are [78]:
- \(\delta : 0.5-4 \, \text{hertz};\)
- \(\theta : 4-8 \, \text{hertz};\)
- \(\alpha : 8-12 \, \text{hertz};\)
- \(\beta : 12-30 \, \text{hertz};\)
- \(\gamma : > 30 \, \text{hertz}.\)
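One common way to obtain the band-power features listed above is to integrate a Welch power spectral density estimate over each band, as sketched below; the upper gamma edge of 45 Hz and the Welch segment length are assumptions rather than values taken from our implementation.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 45)}   # upper gamma edge is an assumption

def band_powers(x, fs=128):
    """Mean power of a 1-D window x in each frequency band (Welch PSD)."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 256))
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = psd[mask].mean() if mask.any() else 0.0
    return powers

# Example: band powers of one 5-s window sampled at 128 Hz
print(band_powers(np.random.randn(640)))
```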
3.2 Data set
In the first stage, we explored three publicly available data sets. The first is the SWELL data set [80]. The authors calculated the inter-beat interval (IBI) between peaks in electrocardiographic (ECG) signals; the heart rate variability (HRV) index was then computed on a 5-min IBI array that was updated by repeatedly appending each new IBI sample. The data set was manually annotated with the conditions under which the data were collected. It contains 204,885 samples with 75 features and 3 labelled classes. Here, 25 people performed regular cognitive activities, including reading e-mails, writing reports, searching and making presentations, under manipulated working conditions. We used a second publicly available data set [81], initially inspired by [82], containing HRV data, to train our proposed ML model and determine arousal levels.
We also used a third publicly available data set, titled ‘EEG during Mental Arithmetic Task Performance’ [79], containing EEG recordings of 36 participants during the resting state and while performing an arithmetic task. The data were collected using a Neurocom monopolar 23-channel EEG system. Electrodes (Fp1, Fp2, F3, F4, Fz, F7, F8, C3, C4, Cz, P3, P4, Pz, O1, O2, T3, T4, T5, T6) were placed on the scalp according to the international 10–20 standard. The sampling rate for each channel was 500 Hz, with a 0.5 Hz high-pass and a 45 Hz low-pass cut-off. In the experimental manipulation, participants were asked to solve mental arithmetic questions to increase cognitive load and induce stress, thus evoking higher arousal states.
4 Result analysis
In this study, we took the data set of EEG signals recorded during mental arithmetic tasks [79]. Decomposed EEG signals for a duration of 5 s before and during an arithmetic task are shown in Fig. 4. The signals were provided in EDF format; they were converted into epochs, and their statistical features (mean, standard deviation, peak-to-peak amplitude, variance, minimum, maximum, arg-min, arg-max, skewness and kurtosis) were calculated. These features were then used to classify the signals; an RF model used for this purpose achieved an accuracy of 87.5%. A sketch of this processing chain is given below.
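The sketch assumes MNE-Python for reading and epoching the EDF recordings and scikit-learn for the RF classifier; the file names, epoch length and model hyperparameters shown here are illustrative assumptions.

```python
import numpy as np
import mne
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def epoch_features(edf_path, duration=5.0):
    """Read an EDF recording, cut fixed-length epochs and compute the statistics
    listed above (mean, std, ptp, var, min, max, argmin, argmax, skew, kurtosis)."""
    raw = mne.io.read_raw_edf(edf_path, preload=True, verbose=False)
    epochs = mne.make_fixed_length_epochs(raw, duration=duration,
                                          preload=True, verbose=False)
    data = epochs.get_data()                 # (n_epochs, n_channels, n_samples)
    return np.concatenate([
        data.mean(axis=2), data.std(axis=2), np.ptp(data, axis=2),
        data.var(axis=2), data.min(axis=2), data.max(axis=2),
        data.argmin(axis=2), data.argmax(axis=2),
        skew(data, axis=2), kurtosis(data, axis=2),
    ], axis=1)                               # (n_epochs, 10 * n_channels)

# Hypothetical usage: one subject's resting-state and task recordings
X_rest = epoch_features("Subject00_1.edf")   # resting state (file name is illustrative)
X_task = epoch_features("Subject00_2.edf")   # arithmetic task
X = np.vstack([X_rest, X_task])
y = np.r_[np.zeros(len(X_rest)), np.ones(len(X_task))]
rf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(rf, X, y, cv=5).mean())
```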
Figure 4 shows the time-domain representation of the EEG signals of [79]. In this figure, the plots on the left show recordings during the initial condition and the plots on the right during the stressed condition in channels F3, F4, Fz and Cz. An increase in the oscillatory activity of the signal is clearly visible from the initial to the stressed condition.
Figure 5 shows the average frequency content of signal epochs before and during the arithmetic task for the data set of [79]. Some changes in excitation levels can be seen: the panels on the left show the signal in a relaxed state, whereas the panels on the right depict the signals under stress while performing mental arithmetic tasks. Similarly, subsequent images in Fig. 5 show the time–frequency analysis of individual channels (F3, Cz and P4) generated using power plots and topographic maps. A clear difference can be seen between the plots before and during the evoked stress states (Fig. 6). Figure 7 shows the pair plot of a few notable features (MEAN-RR, MEDIAN-RR, SDRR-RMSSD, MEDIAN-REL-RR, SDRR-RMSSD-REL-RR, VLF and VLF-PCT) from the SWELL data set [80]. These statistical features have been used to classify the signals for arousal detection, and this publicly available HRV data set has been used to train our ML models. Figure 8 shows the prediction of stressful moments from the HRV data set generated by [81], inspired by [82]; we used this data set to train our proposed ML model and determine momentary stressful states. Figure 9 shows the performance (accuracy, precision, recall and F1-score) on the publicly available data sets that we used to train our models. Here we consider Gaussian Naïve Bayes (GNB), quadratic discriminant analysis (QDA), support vector machine (SVM), multilayer perceptron (MLP), AdaBoost (ADB), k-nearest neighbour (KNN), decision tree (DT) and random forest (RF) models; KNN, DT and RF were used with multiple parameter settings. The top panel shows the performance on the SWELL data set [80], and the bottom panel shows the performance on the EEG data set of [79].
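For reference, the metrics reported in Fig. 9 can be computed per fitted model as in the sketch below; the macro averaging and the `fitted_models` dictionary are assumptions for illustration.

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

def evaluate(model, X_test, y_test):
    """Compute the four metrics reported in Fig. 9 for one fitted classifier."""
    y_pred = model.predict(X_test)
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "precision": precision_score(y_test, y_pred, average="macro"),
        "recall": recall_score(y_test, y_pred, average="macro"),
        "f1": f1_score(y_test, y_pred, average="macro"),
    }

# Hypothetical usage: `fitted_models` maps names (GNB, QDA, SVM, ...) to fitted classifiers
# scores = {name: evaluate(m, X_test, y_test) for name, m in fitted_models.items()}
```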
5 Biofeedback for VRET
As the Related work section (Sect. 2) indicates, the state of the art lacks one key direction: there has been no attempt at real-time biofeedback for VRET intervention. A vital part of our development of VRET is therefore the integration of bio-signals, such as heart rate, heart rate variability or cortical arousal, to assess and ameliorate physiological distress states (e.g., fear or anxiety-induced arousal) during exposure. We have created a VR environment and a mechanism to provide biofeedback during the VRET session. We acquired cortical activity using an Emotiv EPOC Flex and processed the EEG signals in near real time; because we used a window approach, there was a constant delay equal to the window length (as shown in Fig. 9) plus an insignificant variable delay for signal processing. To reduce interference, we aimed to minimise the number of sensors worn by the participant; since we planned to use heart rate as feedback, we had to derive it from the Emotiv EPOC Flex itself, which was challenging. In Fig. 9, we can see an Emotiv EPOC Flex with its adjustable 10–20 electrode layout. The bottom segment shows a sample signal collected from its different electrodes, and the red rectangular box marks a 5-s window from which data were collected at a 128-Hz sampling frequency.
We used electrodes FT9 and FT10, placed across the neck, to determine the heart rate. For the acquired raw signal, we first performed baseline correction, then filtered the data, and finally computed the bipolar difference (FT9–FT10) to estimate the heart rate. In parallel, we used a 5-s window for EEG acquisition, systematically performed baseline correction and filtering, and used electrodes F3, F4, AF3 and AF4 to calculate the laterality index. The calculated heart rate and laterality index were then used as forms of biofeedback. Figure 10 shows the block diagram of the feedback generation process. Figure 11 shows the time-domain representation of the signals at different stages of processing, and Fig. 12 shows the detected peaks used to calculate the heart rate, where false peaks had to be rejected systematically. Figure 13 shows a few snapshots of the virtual environment in which biofeedback is presented. In the environment, images of a heart and a brain change colour and shape, with their size and colour mapped to the level of arousal. A small pink heart represents a normal condition; as the heart rate increases, the heart's colour and size change in the VR environment, while the colour and size of the brain reflect the laterality index.
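The sketch below gives a minimal illustration of how the two feedback quantities can be derived from a 5-s window: a heart-rate estimate from the FT9–FT10 bipolar difference via baseline correction, band-pass filtering and peak detection with false-peak rejection, and an alpha-power laterality index from a left/right frontal channel pair. The specific filter settings, peak-rejection criterion and the (right - left)/(right + left) laterality formula shown here are illustrative choices, not the exact implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks, welch

FS = 128  # sampling frequency of the Emotiv EPOC Flex stream

def bandpass(x, lo, hi, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def heart_rate_bpm(ft9, ft10, fs=FS):
    """Estimate heart rate from the FT9-FT10 bipolar difference of one window."""
    ecg_like = ft9 - ft10                         # bipolar difference carries the cardiac component
    ecg_like = ecg_like - np.median(ecg_like)     # crude baseline correction
    ecg_like = bandpass(ecg_like, 5.0, 30.0, fs)  # assumed pass-band for QRS-like peaks
    # Reject false peaks: enforce a 0.4-s refractory distance (max ~150 bpm)
    # and require a minimum prominence relative to the signal's spread.
    peaks, _ = find_peaks(ecg_like, distance=int(0.4 * fs),
                          prominence=np.std(ecg_like))
    return 60.0 * len(peaks) / (len(ecg_like) / fs)

def laterality_index(left, right, fs=FS, band=(8.0, 12.0)):
    """Alpha-power laterality index (right - left) / (right + left); one common definition."""
    def alpha_power(x):
        f, p = welch(x, fs=fs, nperseg=min(len(x), 256))
        return p[(f >= band[0]) & (f < band[1])].mean()
    p_left, p_right = alpha_power(left), alpha_power(right)
    return (p_right - p_left) / (p_right + p_left)

# Hypothetical usage on one 5-s window (640 samples per channel):
# hr = heart_rate_bpm(window["FT9"], window["FT10"])
# li = laterality_index(window["F3"], window["F4"])
```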
To the best of our knowledge, biofeedback-based intervention for VRET is novel; there is no existing literature or published work on biofeedback for VRET against which to compare our results directly. We plan to repeat the experiment with and without biofeedback and compare the results, and to deploy our proposed machine-learning framework for VRET with biofeedback and compare the outcomes. Nevertheless, it should be kept in mind that, for the same ML algorithm with fixed parameter settings, the results may vary slightly across different data sets, as shown by [48].
6 Challenges and future research directions
As we mentioned in the Related work section (Sect. 2), this work draws on multi-disciplinary research, so diverse open challenges have been identified. Some of the key issues are:
- Real-time analysis of the data used by the ML models: stream processing is one of the next challenges we want to address for this problem.
- A VRET session for one kind of anxiety might be very different from a session targeting a different anxiety or disorder; validating a new development and its implementation against existing work can therefore be very challenging.
- The placement of the BCI electrodes is an important consideration, and it would be interesting to investigate further which brain regions are most relevant for monitoring arousal.
- Haptic feedback could be used to provide biofeedback in VRET; how best to deliver such real-time feedback, and how to incorporate it, remains to be investigated.
- In future, additional sensors (e.g., Polar devices, chest straps and/or wristbands) could be used to collect further types of signals. Moreover, additional data should be collected under different experimental conditions to further improve efficacy.
7 Conclusion
In self-guided VRET, participants can gradually increase their exposure to anxiety-evoking stimuli (such as audience size, audience reaction and the salience of self) to desensitise and reduce momentary anxiety and arousal states, facilitating amelioration of PSA over time. However, creating this VR environment and determining anxiety-induced arousal or momentary stress states is an open challenge. In this work, we showed which selections of parameters and ML models can facilitate arousal detection, and we propose an ML pipeline for effective arousal detection. We trained our models with three publicly available data sets, focusing in particular on EEG and HRV data. Considering these scenarios, our proposed automated ML pipeline will overcome the model selection problem for arousal detection. Our trained ML model can be used for further development in VRET to overcome psychological distress in anxiety and fear-related disorders. As the first phase of this work, we have implemented a biofeedback framework for VRET in which we successfully provided feedback in the form of heart rate and a brain laterality index derived from our acquired multimodal data for psychological intervention to overcome anxiety. Further useful applications of the model can be seen in meltdown moment detection in autism spectrum disorder (ASD) and other scenarios where stress and arousal play a significant role and early intervention will be helpful for physiological amelioration. For example, early identification and signalling of a meltdown moment can facilitate the initiation of targeted interventions preventing meltdowns, which will help parents, carers and supporting staff deal with such occurrences and reduce distress and harm in individuals with ASD. Finally, heightened arousal and increasing stress have become widespread in recent times, adversely affecting a vast range of populations across the globe regardless of age group, ethnicity, gender or work profile. The long ongoing COVID-19 pandemic, changing work patterns and lifestyles, increasing pressures, and technological advancements are a few possible reasons for this trend [56, 61, 81, 84]. Thus, accurate detection of distress-related arousal levels across the general population (e.g., in educational settings or the workplace) may help to avoid associated adverse impacts through effective interventions, prevent long-term mental health issues and improve overall well-being.
References
LeBeau RT, Glenn D, Liao B, Wittchen H-U, Beesdo-Baum K, Ollendick T, Craske MG (2010) Specific phobia: a review of DSM-IV specific phobia and preliminary recommendations for DSM-V. Depress Anxiety 27(2):148–167. https://doi.org/10.1002/da.20655
Grzadzinski R, Huerta M, Lord C (2013) DSM-5 and autism spectrum disorders (ASDs): an opportunity for identifying ASD subtypes. Mol Autism 4(1):1–6
Bălan O, Moldoveanu A, Leordeanu M (2021) A machine learning approach to automatic phobia therapy with virtual reality. In: Opris I, Lebedev AM, Casanova FM (eds) Modern approaches to augmentation of brain function. Contemporary clinical neuroscience. Springer, Cham, pp 607–636. https://doi.org/10.1007/978-3-030-54564-2_27
Choy Y, Fyer AJ, Lipsitz JD (2007) Treatment of specific phobia in adults. Clin Psychol Rev 27(3):266–286. https://doi.org/10.1016/j.cpr.2006.10.002
Standen B, Anderson J, Sumich A, Heym N (2021) Effects of system- and media-driven immersive capabilities on presence and affective experience. Virtual Real. https://doi.org/10.1007/s10055-021-00579-2
Carl E, Stein AT, Levihn-Coon A, Pogue JR, Rothbaum B, Emmelkamp P, Asmundson GJ, Carlbring P, Powers MB (2019) Virtual reality exposure therapy for anxiety and related disorders: a meta-analysis of randomized controlled trials. J Anxiety Disord 61:27–36
Valmaggia LR, Latif L, Kempton MJ, Rus-Calafell M (2016) Virtual reality in the psychological treatment for mental health problems: a systematic review of recent evidence. Psychiatry Res 236:189–195
Horigome T, Kurokawa S, Sawada K, Kudo S, Shiga K, Mimura M, Kishimoto T (2020) Virtual reality exposure therapy for social anxiety disorder: a systematic review and meta-analysis. Psychol Med 50(15):2487–2497
Rahman MA (2018) Gaussian process in computational biology: covariance functions for transcriptomics. Ph.D, University of Sheffield (February 2018). https://etheses.whiterose.ac.uk/19460/. Accessed 11 Feb 2022
Rakib AB, Rumky EA, Ashraf AJ, Hillas MM, Rahman MA (2021) Mental healthcare chatbot using sequence-to-sequence learning and bilstm. In: Mahmud M, Kaiser MS, Vassanelli S, Dai Q, Zhong N (eds) Brain informatics. Springer, Cham, pp 378–387
Islam N et al (2021) Towards machine learning based intrusion detection in IOT networks. Comput Mater Contin 69(2):1801–1821
Farhin F, Kaiser MS, Mahmud M (2021) Secured smart healthcare system: blockchain and bayesian inference based approach. In: Proceedings of TCCE, pp 455–465
Ahmed S, et al (2021) Artificial intelligence and machine learning for ensuring security in smart cities. In: Data-driven mining, learning and analytics for secured smart cities, pp 23–47
Zaman S et al (2021) Security threats and artificial intelligence based countermeasures for internet of things networks: a comprehensive survey. IEEE Access 9:94668–94690
Noor MBT, Zenia NZ, Kaiser MS, Mamun SA, Mahmud M (2020) Application of deep learning in detecting neurological disorders from magnetic resonance images: a survey on the detection of alzheimer’s disease, parkinson’s disease and schizophrenia. Brain Inform 7(1):1–21
Ghosh T, Al Banna MH, Rahman MS, Kaiser MS, Mahmud M, Hosen AS, Cho GH (2021) Artificial intelligence and internet of things in screening and management of autism spectrum disorder. Sustain Cities Soc 74:103189
Biswas M, Kaiser MS, Mahmud M, Al Mamun S, Hossain M, Rahman MA, et al (2021) An xai based autism detection: the context behind the detection. In: Proceedings of brain informatics, pp 448–459
Wadhera T, Mahmud M (2022) Computing hierarchical complexity of the brain from electroencephalogram signals: a graph convolutional network-based approach. In: Proceedings of IJCNN, pp 1–6
Wadhera T, Mahmud M (2022) Influences of social learning in individual perception and decision making in people with autism: a computational approach. In: Proceedings of brain informatics, pp 50–61
Wadhera T, Mahmud M (2022) Brain networks in autism spectrum disorder, epilepsy and their relationship: a machine learning approach. In: Artificial intelligence in healthcare: recent applications and developments, pp 125–142
Wadhera T, Mahmud M (2023) Brain functional network topology in autism spectrum disorder: a novel weighted hierarchical complexity metric for electroencephalogram. IEEE J Biomed Health Inform 27:1718–1725
Sumi AI, et al (2018) fassert: a fuzzy assistive system for children with autism using internet of things. In: Proceedings of brain informatics, pp 403–412
Akhund NU, et al (2018) Adeptness: alzheimer’s disease patient management system using pervasive sensors-early prototype and preliminary results. In: Proceedings of brain informatics, pp 413–422
Al Banna M, Ghosh T, Taher KA, Kaiser MS, Mahmud M, et al (2020) A monitoring system for patients of autism spectrum disorder using artificial intelligence. In: Proceedings of brain informatics, pp 251–262
Jesmin S, Kaiser MS, Mahmud M (2020) Artificial and internet of healthcare things based alzheimer care during covid 19. In: Proceedings of brain informatics, pp 263–274
Ahmed S, Hossain M, Nur SB, Shamim Kaiser M, Mahmud M, et al (2022) Toward machine learning-based psychological assessment of autism spectrum disorders in school and community. In: Proceedings of TEHI, pp 139–149
Mahmud M, et al (2022) Towards explainable and privacy-preserving artificial intelligence for personalisation in autism spectrum disorder. In: Proceedings of HCII, pp 356–370
Nahiduzzaman M, Tasnim M, Newaz NT, Kaiser MS, Mahmud M (2020) Machine learning based early fall detection for elderly people with neurological disorder using multimodal data fusion. In: Mahmud M, Vassanelli S, Kaiser MS, Zhong N (eds) Brain informatics, vol 12241 LNAI, pp 204–214
Biswas M, et al (2021) Indoor navigation support system for patients with neurodegenerative diseases. In: Proceedings of brain informatics, pp 411–422
Sadik R, Reza ML, Al Noman A, Al Mamun S, Kaiser MS, Rahman MA (2020) Covid-19 pandemic: a comparative prediction using machine learning. Int J Autom Artif Intell Mach Learn 1(1):1–16
Mahmud M, Kaiser MS (2021) Machine learning in fighting pandemics: a covid-19 case study. In: COVID-19: prediction, decision-making, and its impacts, pp 77–81
Kumar S et al (2021) Forecasting major impacts of covid-19 pandemic on country-driven sectors: challenges, lessons, and future roadmap. Pers Ubiquitous Comput. https://doi.org/10.1007/s00779-021-01530-7
Bhapkar HR, et al (2021) Rough sets in covid-19 to predict symptomatic cases. In: COVID-19: prediction, decision-making, and its impacts, pp 57–68
Satu MS et al (2021) Short-term prediction of covid-19 cases using machine learning models. Appl Sci 11(9):4266
Prakash N et al (2021) Deep transfer learning for covid-19 detection and infection localization with superpixel based segmentation. Sustain Cities Soc 75:103252
AlArjani A et al (2022) Application of mathematical modeling in prediction of covid-19 transmission dynamics. Arab J Sci Eng 47:10163–10186
Paul A et al (2022) Inverted bell-curve-based ensemble of deep learning models for detection of covid-19 from chest x-rays. Neural Comput Appl. https://doi.org/10.1007/s00521-021-06737-6
Mahmud M, Kaiser MS, Rahman MM, Rahman MA, Shabut A, Al-Mamun S, Hussain A (2018) A brain-inspired trust management model to assure security in a cloud based IOT framework for neuroscience applications. Cogn Comput 10(5):864–873
Mahmud M, Kaiser MS, Hussain A, Vassanelli S (2018) Applications of deep learning and reinforcement learning to biological data. IEEE Trans Neural Netw Learn Syst 29(6):2063–2079
Mahmud M, Kaiser MS, McGinnity TM, Hussain A (2021) Deep learning in mining biological data. Cogn Comput 13(1):1–33
Nasrin F, Ahmed NI, Rahman MA (2021) Auditory attention state decoding for the quiet and hypothetical environment: a comparison between bLSTM and SVM. In: Kaiser MS, Bandyopadhyay A, Mahmud M, Ray K (eds) Proceedings of TCCE. Advances in intelligent systems and computing, Springer, Singapore. pp 291–301. https://doi.org/10.1007/978-981-33-4673-4_23
Rahman MA, Brown DJ, Mahmud M, Shopland N, Haym N, Sumich A, Turabee ZB, Standen B, Downes D, Xing Y, et al (2022) Biofeedback towards machine learning driven self-guided virtual reality exposure therapy based on arousal state detection from multimodal data
Farhin F, Kaiser MS, Mahmud M (2020) Towards secured service provisioning for the internet of healthcare things. In: Proceedings of AICT, pp 1–6
Kaiser MS, et al (2021) 6g access network for intelligent internet of healthcare things: opportunity, challenges, and research directions. In: Proceedings of TCCE, pp 317–328
Biswas M et al (2021) Accu3rate: a mobile health application rating scale based on user reviews. PLoS ONE 16(12):0258050
Rabby G et al (2018) A flexible keyphrase extraction technique for academic literature. Procedia Comput Sci 135:553–563
Rabby G, Azad S, Mahmud M, Zamli KZ, Rahman MM (2020) TeKET: a tree-based unsupervised keyphrase extraction technique. Cogn Comput. https://doi.org/10.1007/s12559-019-09706-3
Adiba FI, Islam T, Kaiser MS, Mahmud M, Rahman MA (2020) Effect of corpora on classification of fake news using Naive Bayes Classifier. Int J Autom Artif Intell Mach Learn 1(1):80–92
Das S, Yasmin MR, Arefin M, Taher KA, Uddin MN, Rahman MA (2021) Mixed Bangla-English spoken digit classification using convolutional neural network. In: Mahmud M, Kaiser MS, Kasabov N, Iftekharuddin K, Zhong N (eds) Applied intelligence and informatics communications in computer and information science. Springer, Cham, pp 371–383. https://doi.org/10.1007/978-3-030-82269-9_29
Nawar A, Toma NT, Al Mamun S, Kaiser MS, Mahmud M, Rahman MA (2021) Cross-content recommendation between movie and book using machine learning. In: 2021 IEEE 15th international conference on application of information and communication technologies (AICT), pp 1–6. https://doi.org/10.1109/AICT52784.2021.9620432
Rahman MA, Brown DJ, Shopland N, Burton A, Mahmud M (2022) Explainable multimodal machine learning for engagement analysis by continuous performance test. In: Antona M, Stephanidis C (eds) Universal access in human-computer interaction. User and context diversity. Lecture notes in computer science. Springer, Cham, pp 386–399. https://doi.org/10.1007/978-3-031-05039-8_28
Rahman MA, Brown DJ, Shopland N, Harris MC, Turabee ZB, Heym N, Sumich A, Standen B, Downes D, Xing Y, Thomas C, Haddick S, Premkumar P, Nastase S, Burton A, Lewis J, Mahmud M (2022) Towards machine learning driven self-guided virtual reality exposure therapy based on arousal state detection from multimodal data. In: Mahmud M, He J, Vassanelli S, van Zundert A, Zhong N (eds) Brain informatics. Springer, Cham, pp 195–209
Mahmud M, Kaiser MS, Rahman MA (2022) Towards explainable and privacy-preserving artificial intelligence for personalisation in autism spectrum disorder. In: Antona M, Stephanidis C (eds) Universal access in human-computer interaction. User and context diversity. Lecture notes in computer science. Springer, Cham, pp 356–370. https://doi.org/10.1007/978-3-031-05039-8_26
Rahman MA, et al (2022) Towards machine learning driven self-guided virtual reality exposure therapy based on arousal state detection from multimodal data. In: Proceedings of brain informatics, pp 195–209
Emotive Epoc Flex. https://www.emotiv.com/epoc-flex/. Accessed 31 Dec 2022
Koelstra S, Muhl C, Soleymani M, Lee Jong-Seok, Yazdani A, Ebrahimi T, Pun T, Nijholt A, Patras I (2012) DEAP: a database for emotion analysis; Using physiological signals. IEEE Trans Affect Comput 3(1):18–31. https://doi.org/10.1109/T-AFFC.2011.15
Ahuja R, Banga A (2019) Mental stress detection in university students using machine learning algorithms. Procedia Comput Sci 152:349–353. https://doi.org/10.1016/j.procs.2019.05.007
Das TR, Hasan S, Sarwar SM, Das JK, Rahman MA (2021) Facial spoof detection using support vector machine. In: Kaiser MS, Bandyopadhyay A, Mahmud M, Ray K (eds) Proceedings of TCCE. Advances in intelligent systems and computing. Springer, Singapore, pp 615–625. https://doi.org/10.1007/978-981-33-4673-4_50
Ferdous H, Siraj T, Setu SJ, Anwar MM, Rahman MA (2021) Machine learning approach towards satellite image classification. In: Kaiser MS, Bandyopadhyay A, Mahmud M, Ray K (eds) Proceedings of TCCE. Advances in intelligent systems and computing. Springer, Singapore, pp 627–637. https://doi.org/10.1007/978-981-33-4673-4_51
Biswas M, Kaiser MS, Mahmud M, Al Mamun S, Hossain MS, Rahman MA (2021) An XAI based autism detection: the context behind the detection. In: Mahmud M, Kaiser MS, Vassanelli S, Dai Q, Zhong N (eds) Brain informatics. Lecture notes in computer science. Springer, Cham, pp 448–459. https://doi.org/10.1007/978-3-030-86993-9_40
Ghaderi A, Frounchi J, Farnam A (2015) Machine learning-based signal processing using physiological signals for stress detection. In: 2015 22nd Iranian conference on biomedical engineering (ICBME), pp 93–98. https://doi.org/10.1109/ICBME.2015.7404123
Yuan Y, Huang J, Yan K (2019) Virtual reality therapy and machine learning techniques in drug addiction treatment. In: 2019 10th international conference on information technology in medicine and education (ITME), pp 241–245. https://doi.org/10.1109/ITME.2019.00062
Leehr EJ, Roesmann K, Bohnlein J, Dannlowski U, Gathmann B, Herrmann MJ, Junghofer M, Schwarzmeier H, Seeger FR, Siminski N, Straube T, Lueken U, Hilbert K (2021) Clinical predictors of treatment response towards exposure therapy in virtuo in spider phobia: a machine learning and external cross-validation approach. J Anxiety Disord. https://doi.org/10.1016/j.janxdis.2021.102448
Schwarzmeier H, Leehr EJ, Bohnlein J, Seeger FR, Roesmann K, Gathmann B, Herrmann MJ, Siminski N, Junghofer M, Straube T, Grotegerd D, Dannlowski U (2020) Theranostic markers for personalized therapy of spider phobia: methods of a bicentric external cross-validation machine learning approach. Int J Methods Psychiatr Res 29(2):1812. https://doi.org/10.1002/mpr.1812
Premkumar P, Heym N, Brown DJ, Battersby S, Sumich A, Huntington B, Daly R, Zysk E (2021) The effectiveness of self-guided virtual-reality exposure therapy for public-speaking anxiety. Front Psychiatry 12:694610
Chen C, Yu X, Belkacem AN, Lu L, Li P, Zhang Z, Wang X, Tan W, Gao Q, Shin D et al (2021) EEG-based anxious states classification using affective BCI-based closed neurofeedback system. J Med Biol Eng 41(2):155–164
Shon D, Im K, Park J-H, Lim D-S, Jang B, Kim J-M (2018) Emotional stress state detection using genetic algorithm-based feature selection on EEG signals. Int J Environ Res Public Health 15(11):2461
Menezes MLR, Samara A, Galway L, Sant’Anna A, Verikas A, Alonso-Fernandez F, Wang H, Bond R (2017) Towards emotion recognition for virtual environments: an evaluation of EEG features on benchmark dataset. Pers Ubiquitous Comput 21(6):1003–1013
Duan L, Duan H, Qiao Y, Sha S, Qi S, Zhang X, Huang J, Huang X, Wang C (2020) Machine learning approaches for MDD detection and emotion decoding using EEG signals. Front Hum Neurosci 14:284
Alshorman O, Masadeh M, Heyat MBB, Akhtar F, Almahasneh H, Ashraf GM, Alexiou A (2021) Frontal lobe real-time EEG analysis using machine learning techniques for mental stress detection. J Integr Neurosci 21:20
Jebelli H, Hwang S, Lee S (2018) EEG-based workers’ stress recognition at construction sites. Autom Constr 93:315–324. https://doi.org/10.1016/j.autcon.2018.05.027
Doborjeh Z, Doborjeh M, Taylor T, Kasabov N, Wang GY, Siegert R, Sumich A (2019) Spiking neural network modelling approach reveals how mindfulness training rewires the brain. Sci Rep 9(1):1–15
Doborjeh Z, Doborjeh M, Crook-Rumsey M, Taylor T, Wang GY, Moreau D, Krägeloh C, Wrapson W, Siegert RJ, Kasabov N et al (2020) Interpretability of spatiotemporal dynamics of the brain processes followed by mindfulness intervention in a brain-inspired spiking neural network architecture. Sensors 20(24):7354
Chen L, Yan J, Chen J, Sheng Y, Xu Z, Mahmud M (2020) An event based topic learning pipeline for neuroimaging literature mining. Brain Inform 7(1):1–14
Lu B-L, Zhang L, Kwok J (eds) (2011) Neural information processing: 18th international conference, ICONIP 2011, Shanghai, China, November 13–17, 2011, Proceedings, Part I. Lecture notes in computer science, vol 7062. Springer, Berlin. https://doi.org/10.1007/978-3-642-24955-6. Accessed 10 June 2022
Jenke R, Peer A, Buss M (2014) Feature extraction and selection for emotion recognition from EEG. IEEE Trans Affect Comput 5(3):327–339. https://doi.org/10.1109/TAFFC.2014.2339834
Shon D, Im K, Park J-H, Lim D-S, Jang B, Kim J-M (2018) Emotional stress state detection using genetic algorithm-based feature selection on EEG signals. Int J Environ Res Public Health 15(11):2461. https://doi.org/10.3390/ijerph15112461
Buzsaki G (2006) Rhythms of the brain. Oxford University Press, Oxford
Zyma I, Tukaev S, Seleznov I, Kiyono K, Popov A, Chernykh M, Shpenkov O (2019) Electroencephalograms during mental arithmetic task performance. Data 4(1):14. https://doi.org/10.3390/data4010014
Koldijk S, Neerincx MA, Kraaij W (2018) Detecting work stress in offices by combining unobtrusive sensors. IEEE Trans Affect Comput 9(2):227–239. https://doi.org/10.1109/TAFFC.2016.2610975
Ottesen C (2022) Stress classifier with AutoML. https://github.com/chriotte/wearable_stress_classification. Accessed 28 Mar 2022
Healey JA (2000) Wearable and automotive systems for affect recognition from physiology. Thesis, Massachusetts Institute of Technology. Accepted 24 Aug 2005. https://dspace.mit.edu/handle/1721.1/9067 Accessed 28 Mar 2022
Gramfort A, Luessi M, Larson E, Engemann DA, Strohmeier D, Brodbeck C, Goj R, Jas M, Brooks T, Parkkonen L, Hämäläinen MS (2013) MEG and EEG data analysis with MNE-Python. Front Neurosci 7(267):1–13. https://doi.org/10.3389/fnins.2013.00267
Newman MG, Szkodny LE, Llera SJ, Przeworski A (2011) A review of technology-assisted self-help and minimal contact therapies for anxiety and depression: is human contact necessary for therapeutic efficacy? Clin Psychol Rev 31(1):89–103. https://doi.org/10.1016/j.cpr.2010.09.008
Acknowledgements
The VRET study is funded by Higher Education Funding Council for England quality-related research (QR) funding awarded to Nottingham Trent University. Additionally, this work is supported by the AI-TOP (2020-1-UK01-KA201-079167) and DIVERSASIA (618615-EPP-1-2020-1-UKEPPKA2-CBHEJP) projects, supported by the European Commission under the Erasmus+ programme.
The authors would like to express their heartfelt gratitude to the scientists who kindly released the data from their experiments.
Author information
Authors and Affiliations
Contributions
All authors have contributed to, seen and approved the paper. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
The Business, Law and Social Sciences College Research Ethics Committee, Nottingham Trent University, UK, provided the ethical approval for data set generation and analysis during this study. The ethics application number is 2017/82. Participants were informed about the study, and consent was collected before the experiment accordingly.
Consent for publication
All authors have seen and approved the current version of the paper.
Competing interests
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Rahman, M.A., Brown, D.J., Mahmud, M. et al. Enhancing biofeedback-driven self-guided virtual reality exposure therapy through arousal detection from multimodal data using machine learning. Brain Inf. 10, 14 (2023). https://doi.org/10.1186/s40708-023-00193-9