ICMI 2022: Bengaluru, India - Companion Publication
- Raj Tumuluri, Nicu Sebe, Gopal Pingali, Dinesh Babu Jayagopi, Abhinav Dhall, Richa Singh, Lisa Anthony, Albert Ali Salah (eds.): International Conference on Multimodal Interaction, ICMI 2022, Companion Volume, Bengaluru, India, November 7-11, 2022. ACM 2022, ISBN 978-1-4503-9389-8
- Mimi Bocanegra, Mailin Lemke, Roelof Anne Jelle de Vries, Geke D. S. Ludden: Mattpod: A Design Proposal for a Multi-Sensory Solo Dining Experience. 1-6
- S. Pavankumar Dubagunta, Edoardo Moneta, Eleni Theocharopoulos, Mathew Magimai-Doss: Towards Automatic Prediction of Non-Expert Perceived Speech Fluency Ratings. 7-11
- Mingkun Xu, Faqiang Liu, Jing Pei: Endowing Spiking Neural Networks with Homeostatic Adaptivity for APS-DVS Bimodal Scenarios. 12-17
- Lucien Maman, Gualtiero Volpe, Giovanna Varni: Training Computational Models of Group Processes without Groundtruth: the Self- vs External Assessment's Dilemma. 18-23
- Elif Ecem Özkan, Tom Gurion, Julian Hough, Patrick G. T. Healey, Lorenzo Jamone: Speaker Motion Patterns during Self-repairs in Natural Dialogue. 24-29
- Everlyne Kimani, Timothy W. Bickmore, Rosalind W. Picard, Matthew S. Goodwin, Holly Jimison: Real-time Public Speaking Anxiety Prediction Model for Oral Presentations. 30-35
- Soujanya Narayana, Ramanathan Subramanian, Ibrahim Radwan, Roland Goecke: To Improve Is to Change: Towards Improving Mood Prediction by Learning Changes in Emotion. 36-41
- Pranava Madhyastha: Towards Integration of Embodiment Features for Prosodic Prominence Prediction from Text. 42-45
- Aaron Chooi, Thileepan Stalin, Plamootil Mathai Aby Raj, Arturo Castillo Ugalde, Yixiao Wang, Elgar Kanhere, Gumawang Hiramandala, Deborah Loh, Pablo Valdivia y Alvarado: Symbiosis: Design and Development of Novel Soft Robotic Structures for Interactive Public Spaces. 46-51
- Michal Muszynski, Elenor Morgenroth, Laura Vilaclara, Dimitri Van De Ville, Patrik Vuilleumier: Impact of aesthetic movie highlights on semantics and emotions: a preliminary analysis. 52-60
- Jinal Hitesh Thakkar, Pooja Rao S. B., Kumar Shubham, Vaibhav Jain, Dinesh Babu Jayagopi: Understanding Interviewees' Perceptions and Behaviour towards Verbally and Non-verbally Expressive Virtual Interviewing Agents. 61-69
- Rozemarijn Hannah Roes, Francisca Pessanha, Almila Akdag Salah: An Emotional Respiration Speech Dataset. 70-78
- Alice Delbosc, Magalie Ochs, Stéphane Ayache: Automatic facial expressions, gaze direction and head movements generation of a virtual agent. 79-88
- Isabel Donya Meywirth, Jana Götze: Can you tell that I'm confused? An overhearer study for German backchannels by an embodied agent. 89-93
- Vladislav Korzun, Anna Beloborodova, Arkady Ilin: ReCell: replicating recurrent cell for auto-regressive pose generation. 94-97
- Andrew Emerson, Patrick Houghton, Ke Chen, Vinay Basheerabad, Rutuja Ubale, Chee Wee Leong: Predicting User Confidence in Video Recordings with Spatio-Temporal Multimodal Analytics. 98-104
- George-Petru Ciordas-Hertel, Daniel Biedermann, Marc Winter, Julia Mordel, Hendrik Drachsler: How can Interaction Data be Contextualized with Mobile Sensing to Enhance Learning Engagement Assessment in Distance Learning? 105-112
- Frédéric Simard, Tomy Aumont, Sayeed A. D. Kizuk, Pascal E. Fortin: Exploring the Benefits of Spatialized Multimodal Psychophysiological Insights for User Experience Research. 113-120
- Kostas Stoitsas, Itir Önal Ertugrul, Werner Liebregts, Merel M. Jung: Predicting evaluations of entrepreneurial pitches based on multimodal nonverbal behavioral cues and self-reported characteristics. 121-126
- Soomin Shin, Doo Yon Kim, Christian Wallraven: Contextual modulation of affect: Comparing humans and deep neural networks. 127-133
- Joshua Y. Kim, Tongliang Liu, Kalina Yacef: Improving Supervised Learning in Conversational Analysis through Reusing Preprocessing Data as Auxiliary Supervisors. 134-143
- Théo Deschamps-Berger, Lori Lamel, Laurence Devillers: Investigating Transformer Encoders and Fusion Strategies for Speech Emotion Recognition in Emergency Call Center Conversations. 144-153
- André Groß, Christian Schütze, Britta Wrede, Birte Richter: An Architecture Supporting Configurable Autonomous Multimodal Joint-Attention-Therapy for Various Robotic Systems. 154-159
- Oliver Roesler, Hardik Kothare, William Burke, Michael Neumann, Jackson Liscombe, Andrew Cornish, Doug Habberstad, David Pautler, David Suendermann-Oeft, Vikram Ramanarayanan: Exploring Facial Metric Normalization For Within- and Between-Subject Comparisons in a Multimodal Health Monitoring Agent. 160-165
- Christian Schütze, André Groß, Britta Wrede, Birte Richter: Enabling Non-Technical Domain Experts to Create Robot-Assisted Therapeutic Scenarios via Visual Programming. 166-170
- Vanessa Richter, Michael Neumann, Hardik Kothare, Oliver Roesler, Jackson Liscombe, David Suendermann-Oeft, Sebastian Prokop, Anzalee Khan, Christian Yavorsky, Jean-Pierre Lindenmayer, Vikram Ramanarayanan: Towards Multimodal Dialog-Based Speech & Facial Biomarkers of Schizophrenia. 171-176
- Valeria Filippou, Nikolas Theodosiou, Mihalis Nicolaou, Elena Constantinou, Georgia G. Panayiotou, Marios Theodorou: A Wavelet-based Approach for Multimodal Prediction of Alexithymia from Physiological Signals. 177-184
- Denisa Qori McDonald, Casey J. Zampella, Evangelos Sariyanidi, Aashvi Manakiwala, Ellis DeJardin, John D. Herrington, Robert T. Schultz, Birkan Tunç: Head Movement Patterns during Face-to-Face Conversations Vary with Age. 185-195
- Jing Liu, Mitja Nikolaus, Kübra Bodur, Abdellah Fourtassi: Predicting Backchannel Signaling in Child-Caregiver Multimodal Conversations. 196-200
- Elena E. Lyakso, Olga V. Frolova, Egor Kleshnev, Nersisson Ruban, A. Mary Mekala, K. V. Arulalan: Approbation of the Child's Emotional Development Method (CEDM). 201-210