Recognition and analysis of human emotion have attracted considerable interest over the past two decades and have been studied extensively in neuroscience, psychology, the cognitive sciences, and computer science. Most past research on machine analysis of human emotion has focused on recognizing prototypic expressions of the six basic emotions from data posed on demand and acquired in laboratory settings. More recently, driven by real-world applications, the field has shifted toward recognizing affective displays recorded in naturalistic settings. This shift in affective computing research aims at subtle, continuous, and context-specific interpretations of affective displays recorded in real-world settings, and at combining multiple modalities for the analysis and recognition of human emotion. Accordingly, this article explores recent advances in dimensional and continuous affect modeling, sensing, and automatic recognition from visual, audio, tactile, and brain-wave modalities.