
FAformer: parallel Fourier-attention architectures benefits EEG-based affective computing with enhanced spatial information

  • Review
  • Published in Neural Computing and Applications

Abstract

The balance between brain functional segregation (i.e., processing within specialized local subsystems) and integration (i.e., processing through global cooperation among those subsystems) is crucial for human cognition, and many deep learning models have been used to exploit spatial information in EEG-based affective computing. However, capturing the intrinsic spatial representation embedded in the topology of EEG channels remains challenging. To address this issue, we propose FAformer, a parallel-branch architecture built on the vision transformer (ViT) that enhances spatial information in EEG signals. In the encoder, one branch applies Adaptive Fourier Neural Operators (AFNO) to model global spatial patterns via a Fourier transform along the electrode-channel dimension. The other branch uses multi-head self-attention (MSA) to explore the dependence of emotion on different channels, which helps to build key local networks. Additionally, a self-supervised learning (SSL) task, adaptive feature dissociation (AdaptiveFD), is developed to increase the distinctiveness of the spatial features produced by the parallel branches and to ensure robustness across subjects. FAformer outperforms competitive models on the DREAMER and DEAP datasets. Moreover, analyses of the model design and hyperparameters demonstrate the effectiveness of FAformer. Finally, feature visualizations reveal the global spatial connections and key local patterns learned by FAformer, which benefit EEG-based affective computing.
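Since only the abstract is available here, the sketch below is a rough, PyTorch-style illustration of the parallel design the abstract describes, not the authors' implementation: one branch mixes information globally across the electrode-channel axis in the Fourier domain (AFNO-style), the other applies multi-head self-attention across channels, and an auxiliary loss keeps the two branch representations distinct. The layer sizes, the point-wise spectral MLP, the concatenation merge, and the cosine-similarity dissociation loss are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ParallelFourierAttentionBlock(nn.Module):
    """One encoder block with a global (Fourier) branch and a local
    (self-attention) branch, both operating over the electrode-channel axis."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        # Global branch: AFNO-style token mixing, simplified here to a
        # point-wise MLP applied to the channel-dimension spectrum.
        self.spectral_mlp = nn.Sequential(
            nn.Linear(d_model, d_model), nn.GELU(), nn.Linear(d_model, d_model)
        )
        # Local branch: multi-head self-attention across electrode channels.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Merge the two branches back to the model width.
        self.merge = nn.Linear(2 * d_model, d_model)

    def forward(self, x):  # x: (batch, n_channels, d_model)
        h = self.norm(x)
        # Fourier branch: FFT along the channel dimension, mix, inverse FFT.
        spec = torch.fft.fft(h, dim=1)
        spec = self.spectral_mlp(spec.real) + 1j * self.spectral_mlp(spec.imag)
        global_feat = torch.fft.ifft(spec, dim=1).real
        # Attention branch: channels attend to each other.
        local_feat, _ = self.attn(h, h, h)
        out = x + self.merge(torch.cat([global_feat, local_feat], dim=-1))
        return out, global_feat, local_feat


def dissociation_loss(global_feat, local_feat):
    """A guess at an AdaptiveFD-style auxiliary objective: penalize cosine
    similarity between the two branch representations so they stay distinct.
    The paper's actual SSL loss may be formulated differently."""
    gf = F.normalize(global_feat.flatten(1), dim=-1)
    lf = F.normalize(local_feat.flatten(1), dim=-1)
    return (gf * lf).sum(dim=-1).abs().mean()


# Toy usage: a batch of 8 samples, 32 EEG channels (e.g. the DEAP layout),
# 64-dimensional per-channel features.
block = ParallelFourierAttentionBlock(d_model=64)
out, g, loc = block(torch.randn(8, 32, 64))
aux = dissociation_loss(g, loc)
print(out.shape, aux.item())  # torch.Size([8, 32, 64]) and a scalar loss
```

Applying the FFT along the channel axis (rather than time) is what lets a point-wise operation in the spectral domain act as fully global mixing over all electrodes, complementing the attention branch's focus on pairwise channel dependencies.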




Data availability

The datasets used or analyzed during the current study are available as follows:

  • DREAMER: https://www.embs.org/jbhi?s=DREAMER
  • DEAP: http://www.eecs.qmul.ac.uk/mmv/datasets/deap/


Funding

This work is supported by the National Natural Science Foundation of China (62173008, 61602017), Beijing Natural Science Foundation (No. 4222022), and the Education and Teaching Research Project of Beijing University of Technology (ER2022SJB06).

Author information


Corresponding author

Correspondence to Haiyan Zhou.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Gao, Z., Huang, J., Chen, J. et al. FAformer: parallel Fourier-attention architectures benefits EEG-based affective computing with enhanced spatial information. Neural Comput & Applic 36, 3903–3919 (2024). https://doi.org/10.1007/s00521-023-09289-z

