Louis-Philippe Morency
Person information
- affiliation: Carnegie Mellon University
2020 – today
- 2024
- [j37]Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency:
Foundations & Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions. ACM Comput. Surv. 56(10): 264 (2024) - [c271]Alex Wilf, Sihyun Shawn Lee, Paul Pu Liang, Louis-Philippe Morency:
Think Twice: Perspective-Taking Improves Large Language Models' Theory-of-Mind Capabilities. ACL (1) 2024: 8292-8308 - [c270]Haofei Yu, Zhengyang Qi, Lawrence Jang, Russ Salakhutdinov, Louis-Philippe Morency, Paul Pu Liang:
MMoE: Enhancing Multimodal Models with Mixtures of Multimodal Interaction Experts. EMNLP 2024: 10006-10030 - [c269]Dong Won Lee, Hae Won Park, Yoon Kim, Cynthia Breazeal, Louis-Philippe Morency:
Global Reward to Local Rewards: Multimodal-Guided Decomposition for Improving Dialogue Agents. EMNLP 2024: 15737-15762 - [c268]Leena Mathur, Paul Pu Liang, Louis-Philippe Morency:
Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions. EMNLP 2024: 20541-20560 - [c267]Paul Pu Liang, Chun Kai Ling, Yun Cheng, Alexander Obolenskiy, Yudong Liu, Rohan Pandey, Alex Wilf, Louis-Philippe Morency, Russ Salakhutdinov:
Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications. ICLR 2024 - [c266]Xuhui Zhou, Hao Zhu, Leena Mathur, Ruohong Zhang, Haofei Yu, Zhengyang Qi, Louis-Philippe Morency, Yonatan Bisk, Daniel Fried, Graham Neubig, Maarten Sap:
SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents. ICLR 2024 - [c265]Ryo Ishii, Shin'ichiro Eitoku, Shohei Matsuo, Motohiro Makiguchi, Ayami Hoshi, Louis-Philippe Morency:
Let's Dance Together! AI Dancers Can Dance to Your Favorite Music and Style. ICMI Companion 2024: 88-90 - [i114]Victoria Lin, Eli Ben-Michael, Louis-Philippe Morency:
Optimizing Language Models for Human Preferences is a Causal Inference Problem. CoRR abs/2402.14979 (2024) - [i113]Dong Won Lee, Hae Won Park, Yoon Kim, Cynthia Breazeal, Louis-Philippe Morency:
Improving Dialogue Agents by Decomposing One Global Explicit Annotation with Local Implicit Multimodal Feedback. CoRR abs/2403.11330 (2024) - [i112]Leena Mathur, Paul Pu Liang, Louis-Philippe Morency:
Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions. CoRR abs/2404.11023 (2024) - [i111]Paul Pu Liang, Akshay Goindani, Talha Chafekar, Leena Mathur, Haofei Yu, Ruslan Salakhutdinov, Louis-Philippe Morency:
HEMM: Holistic Evaluation of Multimodal Foundation Models. CoRR abs/2407.03418 (2024) - [i110]Shentong Mo, Russ Salakhutdinov, Louis-Philippe Morency, Paul Pu Liang:
IoT-LM: Large Multisensory Language Models for the Internet of Things. CoRR abs/2407.09801 (2024)
- 2023
- [j36]Paul Pu Liang, Yiwei Lyu, Xiang Fan, Arav Agarwal, Yun Cheng, Louis-Philippe Morency, Ruslan Salakhutdinov:
MultiZoo and MultiBench: A Standardized Toolkit for Multimodal Deep Learning. J. Mach. Learn. Res. 24: 234:1-234:7 (2023) - [j35]Paul Pu Liang, Yiwei Lyu, Xiang Fan, Jeffrey Tsaw, Yudong Liu, Shentong Mo, Dani Yogatama, Louis-Philippe Morency, Russ Salakhutdinov:
High-Modality Multimodal Transformer: Quantifying Modality & Interaction Heterogeneity for High-Modality Representation Learning. Trans. Mach. Learn. Res. 2023 (2023) - [j34]Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew M. Dai, Andrew La, Andrew K. Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakas, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartlomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, Cèsar Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodolà, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan J. Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, François Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocon, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse H. Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, José Hernández-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. 
Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Senel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, María José Ramírez-Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael I. Ivanitskiy, Michael Starritt, Michael Strube, Michal Swedrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T., Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Milkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima (Shammie) Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay V. 
Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, Ziyi Wu:
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models. Trans. Mach. Learn. Res. 2023 (2023) - [c264]Himanshu Thakur, Atishay Jain, Praneetha Vaddamanu, Paul Pu Liang, Louis-Philippe Morency:
Language Models Get a Gender Makeover: Mitigating Gender Bias with Few-Shot Data Interventions. ACL (2) 2023: 340-351 - [c263]Victoria Lin, Louis-Philippe Morency:
SenteCon: Leveraging Lexicons to Learn Human-Interpretable Language Representations. ACL (Findings) 2023: 4312-4331 - [c262]Rohan Pandey, Rulin Shao, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency:
Cross-modal Attention Congruence Regularization for Vision-Language Relation Alignment. ACL (1) 2023: 5444-5455 - [c261]Xiang Fan, Yiwei Lyu, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency:
Nano: Nested Human-in-the-Loop Reward Learning for Few-shot Language Model Control. ACL (Findings) 2023: 11970-11992 - [c260]Paul Pu Liang, Yiwei Lyu, Gunjan Chhablani, Nihal Jain, Zihao Deng, Xingbo Wang, Louis-Philippe Morency, Ruslan Salakhutdinov:
MultiViz: Towards User-Centric Visualizations and Interpretations of Multimodal Models. CHI Extended Abstracts 2023: 214:1-214:21 - [c259]Lingjing Kong, Martin Q. Ma, Guangyi Chen, Eric P. Xing, Yuejie Chi, Louis-Philippe Morency, Kun Zhang:
Understanding Masked Autoencoders via Hierarchical Latent Variable Models. CVPR 2023: 7918-7928 - [c258]Victoria Lin, Louis-Philippe Morency, Dimitrios Dimitriadis, Srinagesh Sharma:
Counterfactual Augmentation for Multimodal Learning Under Presentation Bias. EMNLP (Findings) 2023: 592-606 - [c257]Victoria Lin, Louis-Philippe Morency, Eli Ben-Michael:
Text-Transport: Toward Learning Causal Effects of Natural Language. EMNLP 2023: 1288-1304 - [c256]Alex Wilf, Syeda Nahida Akter, Leena Mathur, Paul Pu Liang, Sheryl Mathew, Mengrou Shou, Eric Nyberg, Louis-Philippe Morency:
Difference-Masking: Choosing What to Mask in Continued Pretraining. EMNLP (Findings) 2023: 13222-13234 - [c255]Maneesh Bilalpur, Saurabh Hinduja, Laura A. Cariola, Lisa B. Sheeber, Nick Allen, László A. Jeni, Louis-Philippe Morency, Jeffrey F. Cohn:
Multimodal Feature Selection for Detecting Mothers' Depression in Dyadic Interactions with their Adolescent Offspring. FG 2023: 1-8 - [c254]Alex Wilf, Martin Q. Ma, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency:
Face-to-Face Contrastive Learning for Social Intelligence Question-Answering. FG 2023: 1-7 - [c253]Dong Won Lee, Chaitanya Ahuja, Paul Pu Liang, Sanika Natu, Louis-Philippe Morency:
Lecture Presentations Multimodal Dataset: Towards Understanding Multimodality in Educational Videos. ICCV 2023: 20030-20041 - [c252]Chaitanya Ahuja, Pratik Joshi, Ryo Ishii, Louis-Philippe Morency:
Continual Learning for Personalized Co-Speech Gesture Generation. ICCV 2023: 20836-20846 - [c251]Paul Pu Liang, Yiwei Lyu, Gunjan Chhablani, Nihal Jain, Zihao Deng, Xingbo Wang, Louis-Philippe Morency, Ruslan Salakhutdinov:
MultiViz: Towards Visualizing and Understanding Multimodal Models. ICLR 2023 - [c250]Paul Pu Liang, Louis-Philippe Morency:
Tutorial on Multimodal Machine Learning: Principles, Challenges, and Open Questions. ICMI Companion 2023: 101-104 - [c249]Leena Mathur, Maja J. Mataric, Louis-Philippe Morency:
Expanding the Role of Affective Phenomena in Multimodal Interaction Research. ICMI 2023: 253-260 - [c248]Paul Pu Liang, Yun Cheng, Ruslan Salakhutdinov, Louis-Philippe Morency:
Multimodal Fusion Interactions: A Study of Human and Automatic Quantification. ICMI 2023: 425-435 - [c247]Torsten Wörtwein, Nicholas B. Allen, Lisa B. Sheeber, Randy P. Auerbach, Jeffrey F. Cohn, Louis-Philippe Morency:
Neural Mixed Effects for Nonlinear Personalized Predictions. ICMI 2023: 445-454 - [c246]Alexandria K. Vail, Jeffrey M. Girard, Lauren M. Bylsma, Jay Fournier, Holly A. Swartz, Jeffrey F. Cohn, Louis-Philippe Morency:
Representation Learning for Interpersonal and Multimodal Behavior Dynamics: A Multiview Extension of Latent Change Score Models. ICMI 2023: 517-526 - [c245]Maneesh Bilalpur, Saurabh Hinduja, Laura A. Cariola, Lisa Sheeber, Nicholas B. Allen, Louis-Philippe Morency, Jeffrey F. Cohn:
SHAP-based Prediction of Mother's History of Depression to Understand the Influence on Child Behavior. ICMI 2023: 537-544 - [c244]Paul Pu Liang, Yun Cheng, Xiang Fan, Chun Kai Ling, Suzanne Nie, Richard J. Chen, Zihao Deng, Nicholas B. Allen, Randy Auerbach, Faisal Mahmood, Russ Salakhutdinov, Louis-Philippe Morency:
Quantifying & Modeling Multimodal Interactions: An Information Decomposition Framework. NeurIPS 2023 - [c243]Paul Pu Liang, Zihao Deng, Martin Q. Ma, James Y. Zou, Louis-Philippe Morency, Ruslan Salakhutdinov:
Factorized Contrastive Learning: Going Beyond Multi-view Redundancy. NeurIPS 2023 - [e12]Elisabeth André, Mohamed Chetouani, Dominique Vaufreydaz, Gale M. Lucas, Tanja Schultz, Louis-Philippe Morency, Alessandro Vinciarelli:
Proceedings of the 25th International Conference on Multimodal Interaction, ICMI 2023, Paris, France, October 9-13, 2023. ACM 2023 [contents] - [e11]Elisabeth André, Mohamed Chetouani, Dominique Vaufreydaz, Gale M. Lucas, Tanja Schultz, Louis-Philippe Morency, Alessandro Vinciarelli:
International Conference on Multimodal Interaction, ICMI 2023, Companion Volume, Paris, France, October 9-13, 2023. ACM 2023 [contents] - [i109]Paul Pu Liang, Yun Cheng, Xiang Fan, Chun Kai Ling, Suzanne Nie, Richard J. Chen, Zihao Deng, Faisal Mahmood, Ruslan Salakhutdinov, Louis-Philippe Morency:
Quantifying & Modeling Feature Interactions: An Information Decomposition Framework. CoRR abs/2302.12247 (2023) - [i108]Leena Mathur, Maja J. Mataric, Louis-Philippe Morency:
Expanding the Role of Affective Phenomena in Multimodal Interaction Research. CoRR abs/2305.10827 (2023) - [i107]Victoria Lin, Louis-Philippe Morency, Dimitrios Dimitriadis, Srinagesh Sharma:
Counterfactual Augmentation for Multimodal Learning Under Presentation Bias. CoRR abs/2305.14083 (2023) - [i106]Alex Wilf, Syeda Nahida Akter, Leena Mathur, Paul Pu Liang, Sheryl Mathew, Mengrou Shou, Eric Nyberg, Louis-Philippe Morency:
Difference-Masking: Choosing What to Mask in Continued Pretraining. CoRR abs/2305.14577 (2023) - [i105]Victoria Lin, Louis-Philippe Morency:
SenteCon: Leveraging Lexicons to Learn Human-Interpretable Language Representations. CoRR abs/2305.14728 (2023) - [i104]Paul Pu Liang, Yun Cheng, Ruslan Salakhutdinov, Louis-Philippe Morency:
Multimodal Fusion Interactions: A Study of Human and Automatic Quantification. CoRR abs/2306.04125 (2023) - [i103]Paul Pu Liang, Chun Kai Ling, Yun Cheng, Alex Obolenskiy, Yudong Liu, Rohan Pandey, Alex Wilf, Louis-Philippe Morency, Ruslan Salakhutdinov:
Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications. CoRR abs/2306.04539 (2023) - [i102]Himanshu Thakur, Atishay Jain, Praneetha Vaddamanu, Paul Pu Liang, Louis-Philippe Morency:
Language Models Get a Gender Makeover: Mitigating Gender Bias with Few-Shot Data Interventions. CoRR abs/2306.04597 (2023) - [i101]Lingjing Kong, Martin Q. Ma, Guangyi Chen, Eric P. Xing, Yuejie Chi, Louis-Philippe Morency, Kun Zhang:
Understanding Masked Autoencoders via Hierarchical Latent Variable Models. CoRR abs/2306.04898 (2023) - [i100]Paul Pu Liang, Zihao Deng, Martin Ma, James Zou, Louis-Philippe Morency, Ruslan Salakhutdinov:
Factorized Contrastive Learning: Going Beyond Multi-view Redundancy. CoRR abs/2306.05268 (2023) - [i99]Torsten Wörtwein, Nicholas B. Allen, Lisa B. Sheeber, Randy P. Auerbach, Jeffrey F. Cohn, Louis-Philippe Morency:
Neural Mixed Effects for Nonlinear Personalized Predictions. CoRR abs/2306.08149 (2023) - [i98]Paul Pu Liang, Yiwei Lyu, Xiang Fan, Arav Agarwal, Yun Cheng, Louis-Philippe Morency, Ruslan Salakhutdinov:
MultiZoo & MultiBench: A Standardized Toolkit for Multimodal Deep Learning. CoRR abs/2306.16413 (2023) - [i97]Xuhui Zhou, Hao Zhu, Leena Mathur, Ruohong Zhang, Haofei Yu, Zhengyang Qi, Louis-Philippe Morency, Yonatan Bisk, Daniel Fried, Graham Neubig, Maarten Sap:
SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents. CoRR abs/2310.11667 (2023) - [i96]Victoria Lin, Louis-Philippe Morency, Eli Ben-Michael:
Text-Transport: Toward Learning Causal Effects of Natural Language. CoRR abs/2310.20697 (2023) - [i95]Alex Wilf, Alex Tianyi Xu, Paul Pu Liang, Alexander Obolenskiy, Daniel Fried, Louis-Philippe Morency:
Comparative Knowledge Distillation. CoRR abs/2311.02253 (2023) - [i94]Shentong Mo, Paul Pu Liang, Russ Salakhutdinov, Louis-Philippe Morency:
MultiIoT: Towards Large-scale Multisensory Learning for the Internet of Things. CoRR abs/2311.06217 (2023) - [i93]Haofei Yu, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency:
MMOE: Mixture of Multimodal Interaction Experts. CoRR abs/2311.09580 (2023) - [i92]Alex Wilf, Sihyun Shawn Lee, Paul Pu Liang, Louis-Philippe Morency:
Think Twice: Perspective-Taking Improves Large Language Models' Theory-of-Mind Capabilities. CoRR abs/2311.10227 (2023)
- 2022
- [c242]Laura A. Cariola, Saurabh Hinduja, Maneesh Bilalpur, Lisa B. Sheeber, Nicholas B. Allen, Louis-Philippe Morency, Jeffrey F. Cohn:
Language Use in Mother-Adolescent Dyadic Interaction: Preliminary Results. ACII 2022: 1-8 - [c241]Volkan Cirik, Louis-Philippe Morency, Taylor Berg-Kirkpatrick:
HOLM: Hallucinating Objects with Language Models for Referring Expression Recognition in Partially-Observed Scenes. ACL (1) 2022: 5440-5453 - [c240]Yiwei Lyu, Paul Pu Liang, Zihao Deng, Ruslan Salakhutdinov, Louis-Philippe Morency:
DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations. AIES 2022: 455-467 - [c239]Chaitanya Ahuja, Dong Won Lee, Louis-Philippe Morency:
Low-Resource Adaptation for Personalized Co-Speech Gesture Generation. CVPR 2022: 20534-20544 - [c238]Samuel Yu, Peter Wu, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency:
PACS: A Dataset for Physical Audiovisual CommonSense Reasoning. ECCV (37) 2022: 292-309 - [c237]Torsten Wörtwein, Lisa Sheeber, Nicholas B. Allen, Jeffrey F. Cohn, Louis-Philippe Morency:
Beyond Additive Fusion: Learning Non-Additive Multimodal Interactions. EMNLP (Findings) 2022: 4681-4696 - [c236]Yuxin Xiao, Paul Pu Liang, Umang Bhatt, Willie Neiswanger, Ruslan Salakhutdinov, Louis-Philippe Morency:
Uncertainty Quantification with Pre-trained Language Models: A Large-Scale Empirical Analysis. EMNLP (Findings) 2022: 7273-7284 - [c235]Yao-Hung Hubert Tsai, Tianqin Li, Weixin Liu, Peiyuan Liao, Ruslan Salakhutdinov, Louis-Philippe Morency:
Learning Weakly-supervised Contrastive Representations. ICLR 2022 - [c234]Yao-Hung Hubert Tsai, Tianqin Li, Martin Q. Ma, Han Zhao, Kun Zhang, Louis-Philippe Morency, Ruslan Salakhutdinov:
Conditional Contrastive Learning with Kernel. ICLR 2022 - [c233]Louis-Philippe Morency:
What is Multimodal? ICMI 2022: 1 - [c232]Alexandria K. Vail, Jeffrey M. Girard, Lauren M. Bylsma, Jeffrey F. Cohn, Jay Fournier, Holly Swartz, Louis-Philippe Morency:
Toward Causal Understanding of Therapist-Client Relationships: A Study of Language Modality and Social Entrainment. ICMI 2022: 487-494 - [c231]Cheng-Fu Yang, Yao-Hung Hubert Tsai, Wan-Cyuan Fan, Russ Salakhutdinov, Louis-Philippe Morency, Frank Wang:
Paraphrasing Is All You Need for Novel Object Captioning. NeurIPS 2022 - [i91]Yao-Hung Hubert Tsai, Tianqin Li, Martin Q. Ma, Han Zhao, Kun Zhang, Louis-Philippe Morency, Ruslan Salakhutdinov:
Conditional Contrastive Learning with Kernel. CoRR abs/2202.05458 (2022) - [i90]Yao-Hung Hubert Tsai, Tianqin Li, Weixin Liu, Peiyuan Liao, Ruslan Salakhutdinov, Louis-Philippe Morency:
Learning Weakly-Supervised Contrastive Representations. CoRR abs/2202.06670 (2022) - [i89]Paul Pu Liang, Yiwei Lyu, Xiang Fan, Shentong Mo, Dani Yogatama, Louis-Philippe Morency, Ruslan Salakhutdinov:
HighMMT: Towards Modality and Task Generalization for High-Modality Representation Learning. CoRR abs/2203.01311 (2022) - [i88]Yiwei Lyu, Paul Pu Liang, Zihao Deng, Ruslan Salakhutdinov, Louis-Philippe Morency:
DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations. CoRR abs/2203.02013 (2022) - [i87]Samuel Yu, Peter Wu, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency:
PACS: A Dataset for Physical Audiovisual CommonSense Reasoning. CoRR abs/2203.11130 (2022) - [i86]Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew M. Dai, Andrew La, Andrew K. Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakas, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartlomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, Cèsar Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodolà, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan J. Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, François Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocon, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse H. Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, José Hernández-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. 
Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Senel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, María José Ramírez-Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael I. Ivanitskiy, Michael Starritt, Michael Strube, Michal Swedrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T., Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Milkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima (Shammie) Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay V. 
Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, Ziyi Wu:
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models. CoRR abs/2206.04615 (2022) - [i85]Paul Pu Liang, Yiwei Lyu, Gunjan Chhablani, Nihal Jain, Zihao Deng, Xingbo Wang, Louis-Philippe Morency, Ruslan Salakhutdinov:
MultiViz: An Analysis Benchmark for Visualizing and Understanding Multimodal Models. CoRR abs/2207.00056 (2022) - [i84]Alex Wilf, Qianli M. Ma, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency:
Face-to-Face Contrastive Learning for Social Intelligence Question-Answering. CoRR abs/2208.01036 (2022) - [i83]Dong Won Lee, Chaitanya Ahuja, Paul Pu Liang, Sanika Natu, Louis-Philippe Morency:
Multimodal Lecture Presentations Dataset: Understanding Multimodality in Educational Slides. CoRR abs/2208.08080 (2022) - [i82]Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency:
Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions. CoRR abs/2209.03430 (2022) - [i81]Cheng-Fu Yang, Yao-Hung Hubert Tsai, Wan-Cyuan Fan, Ruslan Salakhutdinov, Louis-Philippe Morency, Yu-Chiang Frank Wang:
Paraphrasing Is All You Need for Novel Object Captioning. CoRR abs/2209.12343 (2022) - [i80]Yuxin Xiao, Paul Pu Liang, Umang Bhatt, Willie Neiswanger, Ruslan Salakhutdinov, Louis-Philippe Morency:
Uncertainty Quantification with Pre-trained Language Models: A Large-Scale Empirical Analysis. CoRR abs/2210.04714 (2022) - [i79]Xiang Fan, Yiwei Lyu, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency:
Nano: Nested Human-in-the-Loop Reward Learning for Few-shot Language Model Control. CoRR abs/2211.05750 (2022) - [i78]Aneesha Sampath, Victoria Lin, Louis-Philippe Morency:
SeedBERT: Recovering Annotator Rating Distributions from an Aggregated Label. CoRR abs/2211.13196 (2022) - [i77]Rohan Pandey, Rulin Shao, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency:
Cross-modal Attention Congruence Regularization for Vision-Language Relation Alignment. CoRR abs/2212.10549 (2022)
- 2021
- [c230]Md. Kamrul Hasan, Sangwu Lee, Wasifur Rahman, Amir Zadeh, Rada Mihalcea, Louis-Philippe Morency, Ehsan Hoque:
Humor Knowledge Enriched Transformer for Understanding Multimodal Humor. AAAI 2021: 12972-12980 - [c229]Paul Pu Liang, Terrance Liu, Anna Cai, Michal Muszynski, Ryo Ishii, Nicholas B. Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency:
Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data. ACL/IJCNLP (1) 2021: 4170-4187 - [c228]Peter Wu, Paul Pu Liang, Jiatong Shi, Ruslan Salakhutdinov, Shinji Watanabe, Louis-Philippe Morency:
Understanding the Tradeoffs in Client-side Privacy for Downstream Speech Tasks. APSIPA ASC 2021: 841-848 - [c227]Alexandria K. Vail, Jeffrey M. Girard, Lauren M. Bylsma, Jeffrey F. Cohn, Jay Fournier, Holly Swartz, Louis-Philippe Morency:
Goals, Tasks, and Bonds: Toward the Computational Assessment of Therapist Versus Client Perception of Working Alliance. FG 2021: 1-8 - [c226]Yao-Hung Hubert Tsai, Yue Wu, Ruslan Salakhutdinov, Louis-Philippe Morency:
Self-supervised Learning from a Multi-view Perspective. ICLR 2021 - [c225]Yao-Hung Hubert Tsai, Martin Q. Ma, Muqiao Yang, Han Zhao, Louis-Philippe Morency, Ruslan Salakhutdinov:
Self-supervised Representation Learning with Relative Predictive Coding. ICLR 2021 - [c224]Wei Han, Hui Chen, Alexander F. Gelbukh, Amir Zadeh, Louis-Philippe Morency, Soujanya Poria:
Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis. ICMI 2021: 6-15 - [c223]Dong Won Lee, Chaitanya Ahuja, Louis-Philippe Morency:
Crossmodal Clustered Contrastive Learning: Grounding of Spoken Language to Gesture. ICMI Companion 2021: 202-210 - [c222]Torsten Wörtwein, Lisa B. Sheeber, Nicholas B. Allen, Jeffrey F. Cohn, Louis-Philippe Morency:
Human-Guided Modality Informativeness for Affective States. ICMI 2021: 728-734 - [c221]Dushyant Singh Chauhan, Gopendra Vikram Singh, Navonil Majumder, Amir Zadeh, Asif Ekbal, Pushpak Bhattacharyya, Louis-Philippe Morency, Soujanya Poria:
M2H2: A Multimodal Multiparty Hindi Dataset For Humor Recognition in Conversations. ICMI 2021: 773-777 - [c220]Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, Ruslan Salakhutdinov:
Towards Understanding and Mitigating Social Biases in Language Models. ICML 2021: 6565-6576 - [c219]Ryo Ishii, Xutong Ren, Michal Muszynski, Louis-Philippe Morency:
Multimodal and Multitask Approach to Listener's Backchannel Prediction: Can Prediction of Turn-changing and Turn-management Willingness Improve Backchannel Modeling? IVA 2021: 131-138 - [c218]Paul Pu Liang, Peter Wu, Ziyin Liu, Louis-Philippe Morency, Ruslan Salakhutdinov:
Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment. ACM Multimedia 2021: 2680-2689 - [c217]Hayley Hung, Cathal Gurrin, Martha A. Larson, Hatice Gunes, Fabien Ringeval, Elisabeth André, Louis-Philippe Morency:
Social Signals and Multimedia: Past, Present, Future. ACM Multimedia 2021: 4610-4612 - [c216]Jianing Yang, Yongxin Wang, Ruitao Yi, Yuying Zhu, Azaan Rehman, Amir Zadeh, Soujanya Poria, Louis-Philippe Morency:
MTAG: Modal-Temporal Attention Graph for Unaligned Human Multimodal Language Sequences. NAACL-HLT 2021: 1009-1021 - [c215]Yiwei Lyu, Paul Pu Liang, Hai Pham, Eduard H. Hovy, Barnabás Póczos, Ruslan Salakhutdinov, Louis-Philippe Morency:
StylePTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer. NAACL-HLT 2021: 2116-2138 - [c214]Paul Pu Liang, Yiwei Lyu, Xiang Fan, Zetian Wu, Yun Cheng, Jason Wu, Leslie Chen, Peter Wu, Michelle A. Lee, Yuke Zhu, Ruslan Salakhutdinov, Louis-Philippe Morency:
MultiBench: Multiscale Benchmarks for Multimodal Representation Learning. NeurIPS Datasets and Benchmarks 2021 - [i76]Amir Zadeh, Santiago Benoit, Louis-Philippe Morency:
StarNet: Gradient-free Training of Deep Generative Models using Determined System of Linear Equations. CoRR abs/2101.00574 (2021) - [i75]Peter Wu, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency:
Understanding the Tradeoffs in Client-Side Privacy for Speech Recognition. CoRR abs/2101.08919 (2021) - [i74]Yao-Hung Hubert Tsai, Martin Q. Ma, Muqiao Yang, Han Zhao, Louis-Philippe Morency, Ruslan Salakhutdinov:
Self-supervised Representation Learning with Relative Predictive Coding. CoRR abs/2103.11275 (2021) - [i73]Yiwei Lyu, Paul Pu Liang, Hai Pham, Eduard H. Hovy, Barnabás Póczos, Ruslan Salakhutdinov, Louis-Philippe Morency:
StylePTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer. CoRR abs/2104.05196 (2021) - [i72]Yao-Hung Hubert Tsai, Shaojie Bai, Louis-Philippe Morency, Ruslan Salakhutdinov:
A Note on Connecting Barlow Twins with Negative-Sample-Free Contrastive Learning. CoRR abs/2104.13712 (2021) - [i71]Yao-Hung Hubert Tsai, Martin Q. Ma, Han Zhao, Kun Zhang, Louis-Philippe Morency, Ruslan Salakhutdinov:
Conditional Contrastive Learning: Removing Undesirable Information in Self-Supervised Representations. CoRR abs/2106.02866 (2021) - [i70]Yao-Hung Hubert Tsai, Tianqin Li, Weixin Liu, Peiyuan Liao, Ruslan Salakhutdinov, Louis-Philippe Morency:
Integrating Auxiliary Information in Self-supervised Learning. CoRR abs/2106.02869 (2021) - [i69]Paul Pu Liang, Terrance Liu, Anna Cai, Michal Muszynski, Ryo Ishii, Nicholas B. Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency:
Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data. CoRR abs/2106.13213 (2021) - [i68]Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, Ruslan Salakhutdinov:
Towards Understanding and Mitigating Social Biases in Language Models. CoRR abs/2106.13219 (2021) - [i67]Paul Pu Liang, Yiwei Lyu, Xiang Fan, Zetian Wu, Yun Cheng, Jason Wu, Leslie Chen, Peter Wu, Michelle A. Lee, Yuke Zhu, Ruslan Salakhutdinov, Louis-Philippe Morency:
MultiBench: Multiscale Benchmarks for Multimodal Representation Learning. CoRR abs/2107.07502 (2021) - [i66]Wei Han, Hui Chen, Alexander F. Gelbukh, Amir Zadeh, Louis-Philippe Morency, Soujanya Poria:
Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis. CoRR abs/2107.13669 (2021) - [i65]Dushyant Singh Chauhan, Gopendra Vikram Singh, Navonil Majumder, Amir Zadeh, Asif Ekbal, Pushpak Bhattacharyya, Louis-Philippe Morency, Soujanya Poria:
M2H2: A Multimodal Multiparty Hindi Dataset For Humor Recognition in Conversations. CoRR abs/2108.01260 (2021) - [i64]Amir Zadeh, Santiago Benoit, Louis-Philippe Morency:
Relay Variational Inference: A Method for Accelerated Encoderless VI. CoRR abs/2110.13422 (2021)
- 2020
- [j33]Amir Zadeh, Paul Pu Liang, Louis-Philippe Morency:
Foundations of Multimodal Co-learning. Inf. Fusion 64: 188-193 (2020) - [c213]Victoria Lin, Jeffrey M. Girard, Louis-Philippe Morency:
Context-Dependent Models for Predicting and Characterizing Facial Expressiveness. AffCon@AAAI 2020: 11-28 - [c212]Wasifur Rahman, Md. Kamrul Hasan, Sangwu Lee, AmirAli Bagher Zadeh, Chengfeng Mao, Louis-Philippe Morency, Mohammed E. Hoque:
Integrating Multimodal Information in Large Pretrained Transformers. ACL 2020: 2359-2369 - [c211]Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, Louis-Philippe Morency:
Towards Debiasing Sentence Representations. ACL 2020: 5502-5515 - [c210]Tian Jin, Zhun Liu, Shengjia Yan, Alexandre E. Eichenberger, Louis-Philippe Morency:
Language to Network: Conditional Parameter Adaptation with Natural Language Descriptions. ACL 2020: 6994-7007 - [c209]Volkan Cirik, Taylor Berg-Kirkpatrick, Louis-Philippe Morency:
Refer360°: A Referring Expression Recognition Dataset in 360° Images. ACL 2020: 7189-7202 - [c208]Paul Pu Liang, Jeffrey Chen, Ruslan Salakhutdinov, Louis-Philippe Morency, Satwik Kottur:
On Emergent Communication in Competitive Multi-Agent Teams. AAMAS 2020: 735-743 - [c207]Chaitanya Ahuja, Dong Won Lee, Yukiko I. Nakano, Louis-Philippe Morency:
Style Transfer for Co-speech Gesture Animation: A Multi-speaker Conditional-Mixture Approach. ECCV (18) 2020: 248-265 - [c206]Seong Hyeon Park, Gyubok Lee, Jimin Seo, Manoj Bhat, Minseok Kang, Jonathan Francis, Ashwin R. Jadhav, Paul Pu Liang, Louis-Philippe Morency:
Diverse and Admissible Trajectory Forecasting Through Multimodal Context Understanding. ECCV (11) 2020: 282-298 - [c205]AmirAli Bagher Zadeh, Yansheng Cao, Simon Hessner, Paul Pu Liang, Soujanya Poria, Louis-Philippe Morency:
CMU-MOSEAS: A Multimodal Language Dataset for Spanish, Portuguese, German and French. EMNLP (1) 2020: 1801-1812 - [c204]Yao-Hung Hubert Tsai, Martin Ma, Muqiao Yang, Ruslan Salakhutdinov, Louis-Philippe Morency:
Multimodal Routing: Improving Local and Global Interpretability of Multimodal Language Analysis. EMNLP (1) 2020: 1823-1833 - [c203]Chaitanya Ahuja, Dong Won Lee, Ryo Ishii, Louis-Philippe Morency:
No Gestures Left Behind: Learning Relationships between Spoken Language and Freeform Gestures. EMNLP (Findings) 2020: 1884-1895 - [c202]Torsten Wörtwein, Louis-Philippe Morency:
Simple and Effective Approaches for Uncertainty Prediction in Facial Action Unit Intensity Regression. FG 2020: 452-456 - [c201]Michal Muszynski, Jamie Zelazny, Jeffrey M. Girard, Louis-Philippe Morency:
Depression Severity Assessment for Adolescents at High Risk of Mental Disorders. ICMI 2020: 70-78 - [c200]Victoria Lin, Jeffrey M. Girard, Michael A. Sayette, Louis-Philippe Morency:
Toward Multimodal Modeling of Emotional Expressiveness. ICMI 2020: 548-557 - [c199]Ryo Ishii, Xutong Ren, Michal Muszynski, Louis-Philippe Morency:
Can Prediction of Turn-management Willingness Improve Turn-changing Modeling? IVA 2020: 28:1-28:8 - [c198]Ryo Ishii, Chaitanya Ahuja, Yukiko I. Nakano, Louis-Philippe Morency:
Impact of Personality on Nonverbal Behavior Generation. IVA 2020: 29:1-29:8 - [c197]Yao-Hung Hubert Tsai, Han Zhao, Makoto Yamada, Louis-Philippe Morency, Ruslan Salakhutdinov:
Neural Methods for Point-wise Dependency Estimation. NeurIPS 2020 - [i63]Paul Pu Liang, Terrance Liu, Ziyin Liu, Ruslan Salakhutdinov, Louis-Philippe Morency:
Think Locally, Act Globally: Federated Learning with Local and Global Representations. CoRR abs/2001.01523 (2020) - [i62]Ziyin Liu, Blair Chen, Ru Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda:
Learning Not to Learn in the Presence of Noisy Labels. CoRR abs/2002.06541 (2020) - [i61]Paul Pu Liang, Jeffrey Chen, Ruslan Salakhutdinov, Louis-Philippe Morency, Satwik Kottur:
On Emergent Communication in Competitive Multi-Agent Teams. CoRR abs/2003.01848 (2020) - [i60]Seong Hyeon Park, Gyubok Lee, Manoj Bhat, Jimin Seo, Minseok Kang, Jonathan Francis, Ashwin R. Jadhav, Paul Pu Liang, Louis-Philippe Morency:
Diverse and Admissible Trajectory Forecasting through Multimodal Context Understanding. CoRR abs/2003.03212 (2020) - [i59]Yao-Hung Hubert Tsai, Martin Q. Ma, Muqiao Yang, Ruslan Salakhutdinov, Louis-Philippe Morency:
Interpretable Multimodal Routing for Human Multimodal Language. CoRR abs/2004.14198 (2020) - [i58]Navonil Majumder, Rishabh Bhardwaj, Soujanya Poria, Amir Zadeh, Alexander F. Gelbukh, Amir Hussain, Louis-Philippe Morency:
Improving Aspect-Level Sentiment Analysis with Aspect Extraction. CoRR abs/2005.06607 (2020) - [i57]Yao-Hung Hubert Tsai, Han Zhao, Makoto Yamada, Louis-Philippe Morency, Ruslan Salakhutdinov:
Neural Methods for Point-wise Dependency Estimation. CoRR abs/2006.05553 (2020) - [i56]Yao-Hung Hubert Tsai, Yue Wu, Ruslan Salakhutdinov, Louis-Philippe Morency:
Demystifying Self-Supervised Learning: An Information-Theoretical Framework. CoRR abs/2006.05576 (2020) - [i55]Jianing Yang, Yuying Zhu, Yongxin Wang, Ruitao Yi, Amir Zadeh, Louis-Philippe Morency:
What Gives the Answer Away? Question Answering Bias Analysis on Video QA Datasets. CoRR abs/2007.03626 (2020) - [i54]Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, Louis-Philippe Morency:
Towards Debiasing Sentence Representations. CoRR abs/2007.08100 (2020) - [i53]Chaitanya Ahuja, Dong Won Lee, Yukiko I. Nakano, Louis-Philippe Morency:
Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach. CoRR abs/2007.12553 (2020) - [i52]Victoria Lin, Jeffrey M. Girard, Michael A. Sayette, Louis-Philippe Morency:
Toward Multimodal Modeling of Emotional Expressiveness. CoRR abs/2009.00001 (2020) - [i51]Jianing Yang, Yongxin Wang, Ruitao Yi, Yuying Zhu, Azaan Rehman, Amir Zadeh, Soujanya Poria, Louis-Philippe Morency:
MTGAT: Multimodal Temporal Graph Attention Networks for Unaligned Human Multimodal Language Sequences. CoRR abs/2010.11985 (2020) - [i50]Shangda Li, Devendra Singh Chaplot, Yao-Hung Hubert Tsai, Yue Wu, Louis-Philippe Morency, Ruslan Salakhutdinov:
Unsupervised Domain Adaptation for Visual Navigation. CoRR abs/2010.14543 (2020) - [i49]Terrance Liu, Paul Pu Liang, Michal Muszynski, Ryo Ishii, David Brent, Randy Auerbach, Nicholas B. Allen, Louis-Philippe Morency:
Multimodal Privacy-preserving Mood Prediction from Mobile Data: A Preliminary Study. CoRR abs/2012.02359 (2020) - [i48]Paul Pu Liang, Peter Wu, Ziyin Liu, Louis-Philippe Morency, Ruslan Salakhutdinov:
Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment. CoRR abs/2012.02813 (2020)
2010 – 2019
- 2019
- [j32]Mihai Burzo, Verónica Pérez-Rosas, Daniel McDuff, Louis-Philippe Morency, Alexis Narvaez, Rada Mihalcea:
Sensing Affective Response to Visual Narratives. IEEE Comput. Intell. Mag. 14(2): 54-66 (2019) - [j31]Qinglan Wei, Elif Bozkurt, Louis-Philippe Morency, Bo Sun:
Spontaneous smile intensity estimation by fusing saliency maps and convolutional neural networks. J. Electronic Imaging 28(2): 023031 (2019) - [j30]Iacopo Masi, Feng-Ju Chang, Jongmoo Choi, Shai Harel, Jungyeon Kim, KangGeon Kim, Jatuporn Toy Leksut, Stephen Rawls, Yue Wu, Tal Hassner, Wael AbdAlmageed, Gérard G. Medioni, Louis-Philippe Morency, Prem Natarajan, Ram Nevatia:
Learning Pose-Aware Models for Pose-Invariant Face Recognition in the Wild. IEEE Trans. Pattern Anal. Mach. Intell. 41(2): 379-393 (2019) - [j29]Tadas Baltrusaitis, Chaitanya Ahuja, Louis-Philippe Morency:
Multimodal Machine Learning: A Survey and Taxonomy. IEEE Trans. Pattern Anal. Mach. Intell. 41(2): 423-443 (2019) - [c196]Chaitanya Ahuja, Louis-Philippe Morency:
Language2Pose: Natural Language Grounded Pose Forecasting. 3DV 2019: 719-728 - [c195]Hai Pham, Paul Pu Liang, Thomas Manzini, Louis-Philippe Morency, Barnabás Póczos:
Found in Translation: Learning Robust Joint Representations by Cyclic Translations between Modalities. AAAI 2019: 6892-6899 - [c194]Yansen Wang, Ying Shen, Zhun Liu, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency:
Words Can Shift: Dynamically Adjusting Word Representations Using Nonverbal Behaviors. AAAI 2019: 7216-7223 - [c193]Jeffrey M. Girard, Gayatri Shandar, Zhun Liu, Jeffrey F. Cohn, Lijun Yin, Louis-Philippe Morency:
Reconsidering the Duchenne Smile: Indicator of Positive Emotion or Artifact of Smile Intensity? ACII 2019: 594-599 - [c192]Paul Pu Liang, Zhun Liu, Yao-Hung Hubert Tsai, Qibin Zhao, Ruslan Salakhutdinov, Louis-Philippe Morency:
Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization. ACL (1) 2019: 1569-1576 - [c191]Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, Ruslan Salakhutdinov:
Multimodal Transformer for Unaligned Multimodal Language Sequences. ACL (1) 2019: 6558-6569 - [c190]Amir Zadeh, Michael Chan, Paul Pu Liang, Edmund Tong, Louis-Philippe Morency:
Social-IQ: A Question Answering Benchmark for Artificial Social Intelligence. CVPR 2019: 8807-8817 - [c189]Yao-Hung Hubert Tsai, Santosh Kumar Divvala, Louis-Philippe Morency, Ruslan Salakhutdinov, Ali Farhadi:
Video Relationship Reasoning Using Gated Spatio-Temporal Energy Graph. CVPR 2019: 10424-10433 - [c188]Md. Kamrul Hasan, Wasifur Rahman, AmirAli Bagher Zadeh, Jianyuan Zhong, Md. Iftekhar Tanveer, Louis-Philippe Morency, Mohammed (Ehsan) Hoque:
UR-FUNNY: A Multimodal Language Dataset for Understanding Humor. EMNLP/IJCNLP (1) 2019: 2046-2056 - [c187]Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, Ruslan Salakhutdinov:
Transformer Dissection: An Unified Understanding for Transformer's Attention via the Lens of Kernel. EMNLP/IJCNLP (1) 2019: 4343-4352 - [c186]Yao-Hung Hubert Tsai, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency, Ruslan Salakhutdinov:
Learning Factorized Multimodal Representations. ICLR (Poster) 2019 - [c185]Chaitanya Ahuja, Shugao Ma, Louis-Philippe Morency, Yaser Sheikh:
To React or not to React: End-to-End Visual Pose Forecasting for Personalized Avatar during Dyadic Conversations. ICMI 2019: 74-84 - [c184]Kaixin Ma, Xinyu Wang, Xinru Yang, Mingtong Zhang, Jeffrey M. Girard, Louis-Philippe Morency:
ElderReact: A Multimodal Dataset for Recognizing Emotional Response in Aging Adults. ICMI 2019: 349-357 - [c183]Ankit Parag Shah, Vasu Sharma, Vaibhav Vaibhav, Mahmoud Alismail, Louis-Philippe Morency:
Multimodal Behavioral Markers Exploring Suicidal Intent in Social Media Videos. ICMI 2019: 409-413 - [c182]Wenchao Du, Louis-Philippe Morency, Jeffrey F. Cohn, Alan W. Black:
Bag-of-Acoustic-Words for Mental Health Assessment: A Deep Autoencoding Approach. INTERSPEECH 2019: 1428-1432 - [c181]Shih-Fu Chang, Louis-Philippe Morency, Alexander G. Hauptmann, Alberto Del Bimbo, Cathal Gurrin, Hayley Hung, Heng Ji, Alan F. Smeaton:
PANEL: Challenges for Multimedia/Multimodal Research in the Next Decade. ACM Multimedia 2019: 2234-2235 - [c180]Paul Pu Liang, Yao Chong Lim, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov, Louis-Philippe Morency:
Strong and Simple Baselines for Multimodal Utterance Embeddings. NAACL-HLT (1) 2019: 2599-2609 - [c179]Ziyin Liu, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda:
Deep Gamblers: Learning to Abstain with Portfolio Theory. NeurIPS 2019: 10622-10632 - [i47]Vasu Sharma, Ankita Kalra, Louis-Philippe Morency:
Induced Attention Invariance: Defending VQA Models against Adversarial Attacks. ViGIL@NeurIPS 2019 - [i46]Amir Zadeh, Yao Chong Lim, Paul Pu Liang, Louis-Philippe Morency:
Variational Auto-Decoder. CoRR abs/1903.00840 (2019) - [i45]Yao-Hung Hubert Tsai, Santosh Kumar Divvala, Louis-Philippe Morency, Ruslan Salakhutdinov, Ali Farhadi:
Video Relationship Reasoning using Gated Spatio-Temporal Energy Graph. CoRR abs/1903.10547 (2019) - [i44]Md. Kamrul Hasan, Wasifur Rahman, Amir Zadeh, Jianyuan Zhong, Md. Iftekhar Tanveer, Louis-Philippe Morency, Mohammed E. Hoque:
UR-FUNNY: A Multimodal Language Dataset for Understanding Humor. CoRR abs/1904.06618 (2019) - [i43]Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, Ruslan Salakhutdinov:
Multimodal Transformer for Unaligned Multimodal Language Sequences. CoRR abs/1906.00295 (2019) - [i42]Paul Pu Liang, Yao Chong Lim, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov, Louis-Philippe Morency:
Strong and Simple Baselines for Multimodal Utterance Embeddings. CoRR abs/1906.02125 (2019) - [i41]Ziyin Liu, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda:
Deep Gamblers: Learning to Abstain with Portfolio Theory. CoRR abs/1907.00208 (2019) - [i40]Paul Pu Liang, Zhun Liu, Yao-Hung Hubert Tsai, Qibin Zhao, Ruslan Salakhutdinov, Louis-Philippe Morency:
Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization. CoRR abs/1907.01011 (2019) - [i39]Chaitanya Ahuja, Louis-Philippe Morency:
Language2Pose: Natural Language Grounded Pose Forecasting. CoRR abs/1907.01108 (2019) - [i38]Shih-Fu Chang, Alexander G. Hauptmann, Louis-Philippe Morency, Sameer K. Antani, Dick C. A. Bulterman, Carlos Busso, Joyce Yue Chai, Julia Hirschberg, Ramesh C. Jain, Ketan Mayer-Patel, Reuven Meth, Raymond J. Mooney, Klara Nahrstedt, Shrikanth S. Narayanan, Prem Natarajan, Sharon L. Oviatt, Balakrishnan Prabhakaran, Arnold W. M. Smeulders, Hari Sundaram, Zhengyou Zhang, Michelle X. Zhou:
Report of 2017 NSF Workshop on Multimedia Challenges, Opportunities and Research Roadmaps. CoRR abs/1908.02308 (2019) - [i37]Wasifur Rahman, Md. Kamrul Hasan, Amir Zadeh, Louis-Philippe Morency, Mohammed Ehsan Hoque:
M-BERT: Injecting Multimodal Information in the BERT Structure. CoRR abs/1908.05787 (2019) - [i36]Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, Ruslan Salakhutdinov:
Transformer Dissection: An Unified Understanding for Transformer's Attention via the Lens of Kernel. CoRR abs/1908.11775 (2019) - [i35]Chaitanya Ahuja, Shugao Ma, Louis-Philippe Morency, Yaser Sheikh:
To React or not to React: End-to-End Visual Pose Forecasting for Personalized Avatar during Dyadic Conversations. CoRR abs/1910.02181 (2019) - [i34]Amir Zadeh, Tianjun Ma, Soujanya Poria, Louis-Philippe Morency:
WildMix Dataset and Spectro-Temporal Transformer Model for Monoaural Audio Source Separation. CoRR abs/1911.09783 (2019) - [i33]Amir Zadeh, Chengfeng Mao, Kelly Shi, Yiwei Zhang, Paul Pu Liang, Soujanya Poria, Louis-Philippe Morency:
Factorized Multimodal Transformer for Multimodal Sequential Learning. CoRR abs/1911.09826 (2019) - [i32]Victoria Lin, Jeffrey M. Girard, Louis-Philippe Morency:
Context-Dependent Models for Predicting and Characterizing Facial Expressiveness. CoRR abs/1912.04523 (2019) - [i31]Amir Zadeh, Simon Hessner, Yao Chong Lim, Louis-Philippe Morency:
Pseudo-Encoded Stochastic Variational Inference. CoRR abs/1912.09423 (2019)
- 2018
- [j28]Erroll Wood, Tadas Baltrusaitis, Louis-Philippe Morency, Peter Robinson, Andreas Bulling:
GazeDirector: Fully Articulated Eye Gaze Redirection in Video. Comput. Graph. Forum 37(2): 217-225 (2018) - [c178]Chaitanya Ahuja, Louis-Philippe Morency:
Lattice Recurrent Unit: Improving Convergence and Statistical Efficiency for Sequence Modeling. AAAI 2018: 4996-5003 - [c177]Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, Louis-Philippe Morency:
Memory Fusion Network for Multi-view Sequential Learning. AAAI 2018: 5634-5641 - [c176]Amir Zadeh, Paul Pu Liang, Soujanya Poria, Prateek Vij, Erik Cambria, Louis-Philippe Morency:
Multi-attention Recurrent Network for Human Communication Comprehension. AAAI 2018: 5642-5649 - [c175]Volkan Cirik, Taylor Berg-Kirkpatrick, Louis-Philippe Morency:
Using Syntax to Ground Referring Expressions in Natural Images. AAAI 2018: 6756-6764 - [c174]Amir Zadeh, Paul Pu Liang, Soujanya Poria, Erik Cambria, Louis-Philippe Morency:
Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph. ACL (1) 2018: 2236-2246 - [c173]Zhun Liu, Ying Shen, Varun Bharadhwaj Lakshminarasimhan, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency:
Efficient Low-rank Multimodal Fusion With Modality-Specific Factors. ACL (1) 2018: 2247-2256 - [c172]Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency:
Multimodal Language Analysis with Recurrent Multistage Fusion. EMNLP 2018: 150-161 - [c171]Tadas Baltrusaitis, Amir Zadeh, Yao Chong Lim, Louis-Philippe Morency:
OpenFace 2.0: Facial Behavior Analysis Toolkit. FG 2018: 59-66 - [c170]Liandong Li, Tadas Baltrusaitis, Bo Sun, Louis-Philippe Morency:
Edge Convolutional Network for Facial Action Intensity Estimation. FG 2018: 171-178 - [c169]Naomi Eigbe, Tadas Baltrusaitis, Louis-Philippe Morency, John Pestian:
Toward Visual Behavior Markers of Suicidal Ideation. FG 2018: 530-534 - [c168]Alexandria K. Vail, Elizabeth S. Liebson, Justin T. Baker, Louis-Philippe Morency:
Toward Objective, Multifaceted Characterization of Psychotic Disorders: Lexical, Structural, and Disfluency Markers of Spoken Language. ICMI 2018: 170-178 - [c167]Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency:
Multimodal Local-Global Ranking Fusion for Emotion Recognition. ICMI 2018: 472-476 - [c166]Yulun Du, Alan W. Black, Louis-Philippe Morency, Maxine Eskénazi:
Multimodal Polynomial Fusion for Detecting Driver Distraction. INTERSPEECH 2018: 611-615 - [c165]Volkan Cirik, Louis-Philippe Morency, Taylor Berg-Kirkpatrick:
Visual Referring Expression Recognition: What Do Systems Actually Learn? NAACL-HLT (2) 2018: 781-787 - [c164]Devamanyu Hazarika, Soujanya Poria, Amir Zadeh, Erik Cambria, Louis-Philippe Morency, Roger Zimmermann:
Conversational Memory Network for Emotion Recognition in Dyadic Dialogue Videos. NAACL-HLT 2018: 2122-2132 - [c163]Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, Trevor Darrell:
Speaker-Follower Models for Vision-and-Language Navigation. NeurIPS 2018: 3318-3329 - [c162]Liang-Yan Gui, Liangke Gui, Yu-Xiong Wang, Louis-Philippe Morency, José M. F. Moura:
Factorized Convolutional Networks: Unsupervised Fine-Tuning for Image Clustering. WACV 2018: 1205-1214 - [p2]Tadas Baltrusaitis, Chaitanya Ahuja, Louis-Philippe Morency:
Challenges and applications in multimodal machine learning. The Handbook of Multimodal-Multisensor Interfaces, Volume 2 (2) 2018: 17-48 - [p1]Samy Bengio, Li Deng, Louis-Philippe Morency, Björn W. Schuller:
Perspectives on predictive power of multimodal deep learning: surprises and future directions. The Handbook of Multimodal-Multisensor Interfaces, Volume 2 (2) 2018: 455-472 - [i30]Amir Zadeh, Paul Pu Liang, Soujanya Poria, Prateek Vij, Erik Cambria, Louis-Philippe Morency:
Multi-attention Recurrent Network for Human Communication Comprehension. CoRR abs/1802.00923 (2018) - [i29]Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltrusaitis, Amir Zadeh, Louis-Philippe Morency:
Multimodal Sentiment Analysis with Word-Level Fusion and Reinforcement Learning. CoRR abs/1802.00924 (2018) - [i28]Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, Louis-Philippe Morency:
Memory Fusion Network for Multi-view Sequential Learning. CoRR abs/1802.00927 (2018) - [i27]Volkan Cirik, Taylor Berg-Kirkpatrick, Louis-Philippe Morency:
Using Syntax to Ground Referring Expressions in Natural Images. CoRR abs/1805.10547 (2018) - [i26]Volkan Cirik, Louis-Philippe Morency, Taylor Berg-Kirkpatrick:
Visual Referring Expression Recognition: What Do Systems Actually Learn? CoRR abs/1805.11818 (2018) - [i25]Zhun Liu, Ying Shen, Varun Bharadhwaj Lakshminarasimhan, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency:
Efficient Low-rank Multimodal Fusion with Modality-Specific Factors. CoRR abs/1806.00064 (2018) - [i24]Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, Trevor Darrell:
Speaker-Follower Models for Vision-and-Language Navigation. CoRR abs/1806.02724 (2018) - [i23]Yao-Hung Hubert Tsai, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency, Ruslan Salakhutdinov:
Learning Factorized Multimodal Representations. CoRR abs/1806.06176 (2018) - [i22]Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency:
Multimodal Language Analysis with Recurrent Multistage Fusion. CoRR abs/1808.03920 (2018) - [i21]Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency:
Multimodal Local-Global Ranking Fusion for Emotion Recognition. CoRR abs/1809.04931 (2018) - [i20]Yulun Du, Chirag Raman, Alan W. Black, Louis-Philippe Morency, Maxine Eskénazi:
Multimodal Polynomial Fusion for Detecting Driver Distraction. CoRR abs/1810.10565 (2018) - [i19]Yansen Wang, Ying Shen, Zhun Liu, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency:
Words Can Shift: Dynamically Adjusting Word Representations Using Nonverbal Behaviors. CoRR abs/1811.09362 (2018) - [i18]Hai Pham, Paul Pu Liang, Thomas Manzini, Louis-Philippe Morency, Barnabás Póczos:
Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities. CoRR abs/1812.07809 (2018) - 2017
- [j27]Gale M. Lucas, Albert A. Rizzo, Jonathan Gratch, Stefan Scherer, Giota Stratou, Jill Boberg, Louis-Philippe Morency:
Reporting Mental Health Symptoms: Breaking Down Barriers to Care with Virtual Human Interviewers. Frontiers Robotics AI 4: 51 (2017) - [j26]Giota Stratou, Louis-Philippe Morency:
MultiSense - Context-Aware Nonverbal Behavior Analysis Framework: A Psychological Distress Use Case. IEEE Trans. Affect. Comput. 8(2): 190-203 (2017) - [j25]Verena Venek, Stefan Scherer, Louis-Philippe Morency, Albert Skip Rizzo, John Pestian:
Adolescent Suicidal Risk Assessment in Clinician-Patient Interaction. IEEE Trans. Affect. Comput. 8(2): 204-215 (2017) - [c161]Ting-Yao Hu, Chirag Raman, Salvador Medina Maza, Liangke Gui, Tadas Baltrusaitis, Robert E. Frederking, Louis-Philippe Morency, Alan W. Black, Maxine Eskénazi:
Integrating Verbal and Nonverbal Input into a Dynamic Response Spoken Dialogue System. AAAI 2017: 5091-5092 - [c160]Tadas Baltrusaitis, Liandong Li, Louis-Philippe Morency:
Local-global ranking for facial expression intensity estimation. ACII 2017: 111-118 - [c159]Behnaz Nojavanasghari, Charles E. Hughes, Tadas Baltrusaitis, Louis-Philippe Morency:
Hand2Face: Automatic synthesis and recognition of hand over face occlusions. ACII 2017: 209-215 - [c158]Alexandria Katarina Vail, Tadas Baltrusaitis, Luciana Pennant, Elizabeth S. Liebson, Justin T. Baker, Louis-Philippe Morency:
Visual attention in schizophrenia: Eye contact and gaze aversion during clinical interactions. ACII 2017: 490-497 - [c157]Louis-Philippe Morency, Tadas Baltrusaitis:
Multimodal Machine Learning: Integrating Language, Vision and Speech. ACL (Tutorial Abstracts) 2017: 3-5 - [c156]Sayan Ghosh, Mathieu Chollet, Eugene Laksana, Louis-Philippe Morency, Stefan Scherer:
Affect-LM: A Neural Language Model for Customizable Affective Text Generation. ACL (1) 2017: 634-642 - [c155]Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Majumder, Amir Zadeh, Louis-Philippe Morency:
Context-Dependent Sentiment Analysis in User-Generated Videos. ACL (1) 2017: 873-883 - [c154]Edmund Tong, Amir Zadeh, Cara Jones, Louis-Philippe Morency:
Combating Human Trafficking with Multimodal Deep Models. ACL (1) 2017: 1547-1556 - [c153]Behnaz Nojavanasghari, Charles E. Hughes, Louis-Philippe Morency:
Exceptionally Social: Design of an Avatar-Mediated Interactive System for Promoting Social Skills in Children with Autism. CHI Extended Abstracts 2017: 1932-1939 - [c152]Wenjie Pei, Tadas Baltrusaitis, David M. J. Tax, Louis-Philippe Morency:
Temporal Attention-Gated Model for Robust Sequence Classification. CVPR 2017: 820-829 - [c151]Amir Zadeh, Tadas Baltrusaitis, Louis-Philippe Morency:
Convolutional Experts Constrained Local Model for Facial Landmark Detection. CVPR Workshops 2017: 2051-2059 - [c150]Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, Louis-Philippe Morency:
Tensor Fusion Network for Multimodal Sentiment Analysis. EMNLP 2017: 1103-1114 - [c149]Liangke Gui, Tadas Baltrusaitis, Louis-Philippe Morency:
Curriculum Learning for Facial Expression Recognition. FG 2017: 505-511 - [c148]KangGeon Kim, Feng-Ju Chang, Jongmoo Choi, Louis-Philippe Morency, Ramakant Nevatia, Gérard G. Medioni:
Local-Global Landmark Confidences for Face Recognition. FG 2017: 666-672 - [c147]Christy Yuan Li, Tadas Baltrusaitis, Louis-Philippe Morency:
Constrained Ensemble Initialization for Facial Landmark Tracking in Video. FG 2017: 697-704 - [c146]Eugene Laksana, Tadas Baltrusaitis, Louis-Philippe Morency, John P. Pestian:
Investigating Facial Behavior Indicators of Suicidal Ideation. FG 2017: 770-777 - [c145]Amir Zadeh, Yao Chong Lim, Tadas Baltrusaitis, Louis-Philippe Morency:
Convolutional Experts Constrained Local Model for 3D Facial Landmark Detection. ICCV Workshops 2017: 2519-2528 - [c144]Liandong Li, Tadas Baltrusaitis, Bo Sun, Louis-Philippe Morency:
Combining Sequential Geometry and Texture Features for Distinguishing Genuine and Deceptive Emotions. ICCV Workshops 2017: 3147-3153 - [c143]Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Majumder, Amir Zadeh, Louis-Philippe Morency:
Multi-level Multiple Attentions for Contextual Multimodal Sentiment Analysis. ICDM 2017: 1033-1038 - [c142]Haohan Wang, Aaksha Meghawat, Louis-Philippe Morency, Eric P. Xing:
Select-additive learning: Improving generalization in multimodal sentiment analysis. ICME 2017: 949-954 - [c141]Abdelwahab Bourai, Tadas Baltrusaitis, Louis-Philippe Morency:
Automatically predicting human knowledgeability through non-verbal cues. ICMI 2017: 60-67 - [c140]Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltrusaitis, Amir Zadeh, Louis-Philippe Morency:
Multimodal sentiment analysis with word-level fusion and reinforcement learning. ICMI 2017: 163-171 - [c139]Torsten Wörtwein, Tadas Baltrusaitis, Eugene Laksana, Luciana Pennant, Elizabeth S. Liebson, Dost Öngür, Justin T. Baker, Louis-Philippe Morency:
Computational Analysis of Acoustic Descriptors in Psychotic Patients. INTERSPEECH 2017: 3256-3260 - [c138]Hongliang Yu, Liangke Gui, Michael A. Madaio, Amy Ogan, Justine Cassell, Louis-Philippe Morency:
Temporally Selective Attention Model for Social and Affective State Recognition in Multimedia Content. ACM Multimedia 2017: 1743-1751 - [i17]Rita Singh, Justin T. Baker, Luciana Pennant, Louis-Philippe Morency:
Deducing the severity of psychiatric symptoms from the human voice. CoRR abs/1703.05344 (2017) - [i16]Sayan Ghosh, Mathieu Chollet, Eugene Laksana, Louis-Philippe Morency, Stefan Scherer:
Affect-LM: A Neural Language Model for Customizable Affective Text Generation. CoRR abs/1704.06851 (2017) - [i15]Erroll Wood, Tadas Baltrusaitis, Louis-Philippe Morency, Peter Robinson, Andreas Bulling:
GazeDirector: Fully Articulated Eye Gaze Redirection in Video. CoRR abs/1704.08763 (2017) - [i14]Edmund Tong, Amir Zadeh, Cara Jones, Louis-Philippe Morency:
Combating Human Trafficking with Deep Multimodal Models. CoRR abs/1705.02735 (2017) - [i13]Tadas Baltrusaitis, Chaitanya Ahuja, Louis-Philippe Morency:
Multimodal Machine Learning: A Survey and Taxonomy. CoRR abs/1705.09406 (2017) - [i12]Abhilasha Ravichander, Shruti Rijhwani, Rajat Kulshreshtha, Chirag Nagpal, Tadas Baltrusaitis, Louis-Philippe Morency:
Preserving Intermediate Objectives: One Simple Trick to Improve Learning for Hierarchical Models. CoRR abs/1706.07867 (2017) - [i11]Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, Louis-Philippe Morency:
Tensor Fusion Network for Multimodal Sentiment Analysis. CoRR abs/1707.07250 (2017) - [i10]Behnaz Nojavanasghari, Charles E. Hughes, Tadas Baltrusaitis, Louis-Philippe Morency:
Hand2Face: Automatic Synthesis and Recognition of Hand Over Face Occlusions. CoRR abs/1708.00370 (2017) - [i9]Chaitanya Ahuja, Louis-Philippe Morency:
Lattice Recurrent Unit: Improving Convergence and Statistical Efficiency for Sequence Modeling. CoRR abs/1710.02254 (2017) - 2016
- [j24]Amir Zadeh, Rowan Zellers, Eli Pincus, Louis-Philippe Morency:
Multimodal Sentiment Intensity Analysis in Videos: Facial Gestures and Verbal Messages. IEEE Intell. Syst. 31(6): 82-88 (2016) - [j23]Stefan Scherer, Gale M. Lucas, Jonathan Gratch, Albert Skip Rizzo, Louis-Philippe Morency:
Self-Reported Symptoms of Depression and PTSD Are Associated with Reduced Vowel Space in Screening Interviews. IEEE Trans. Affect. Comput. 7(1): 59-73 (2016) - [j22]Sunghyun Park, Han Suk Shim, Moitreya Chatterjee, Kenji Sagae, Louis-Philippe Morency:
Multimodal Analysis and Prediction of Persuasiveness in Online Social Multimedia. ACM Trans. Interact. Intell. Syst. 6(3): 25:1-25:25 (2016) - [c137]KangGeon Kim, Tadas Baltrusaitis, Amir Zadeh, Louis-Philippe Morency, Gérard G. Medioni:
Holistically Constrained Local Model: Going Beyond Frontal Poses for Facial Landmark Detection. BMVC 2016 - [c136]Erroll Wood, Tadas Baltrusaitis, Louis-Philippe Morency, Peter Robinson, Andreas Bulling:
A 3D Morphable Eye Region Model for Gaze Estimation. ECCV (1) 2016: 297-313 - [c135]Shyam Sundar Rajagopalan, Louis-Philippe Morency, Tadas Baltrusaitis, Roland Goecke:
Extending Long Short-Term Memory for Multi-View Structured Learning. ECCV (7) 2016: 338-353 - [c134]Lujie Chen, Xin Li, Zhuyun Xia, Zhanmei Song, Louis-Philippe Morency, Artur Dubrawski:
Riding an emotional roller-coaster: A multimodal study of young child's math problem solving activities. EDM 2016: 38-45 - [c133]Hongliang Yu, Shikun Zhang, Louis-Philippe Morency:
Unsupervised Text Recap Extraction for TV Series. EMNLP 2016: 1797-1806 - [c132]Erroll Wood, Tadas Baltrusaitis, Louis-Philippe Morency, Peter Robinson, Andreas Bulling:
Learning an appearance-based gaze estimator from one million synthesised images. ETRA 2016: 131-138 - [c131]Erroll Wood, Tadas Baltrusaitis, Louis-Philippe Morency, Peter Robinson, Andreas Bulling:
A 3D Morphable Model of the Eye Region. Eurographics (Posters) 2016: 35-36 - [c130]Sayan Ghosh, Eugene Laksana, Louis-Philippe Morency, Stefan Scherer:
An unsupervised approach to glottal inverse filtering. EUSIPCO 2016: 220-224 - [c129]Behnaz Nojavanasghari, Tadas Baltrusaitis, Charles E. Hughes, Louis-Philippe Morency:
EmoReact: a multimodal approach and dataset for recognizing emotional responses in children. ICMI 2016: 137-144 - [c128]Behnaz Nojavanasghari, Deepak Gopinath, Jayanth Koushik, Tadas Baltrusaitis, Louis-Philippe Morency:
Deep multimodal fusion for persuasiveness prediction. ICMI 2016: 284-288 - [c127]Sayan Ghosh, Eugene Laksana, Louis-Philippe Morency, Stefan Scherer:
Representation Learning for Speech Emotion Recognition. INTERSPEECH 2016: 3603-3607 - [c126]Melissa Roemmele, Soja-Marie Morgens, Andrew S. Gordon, Louis-Philippe Morency:
Recognizing Human Actions in the Motion Trajectories of Shapes. IUI 2016: 271-281 - [c125]Mathieu Chollet, Nithin Chandrashekhar, Ari Shapiro, Louis-Philippe Morency, Stefan Scherer:
Manipulating the Perception of Virtual Audiences Using Crowdsourced Behaviors. IVA 2016: 164-174 - [c124]Mathieu Chollet, Torsten Wörtwein, Louis-Philippe Morency, Stefan Scherer:
A Multimodal Corpus for the Assessment of Public Speaking Ability and Anxiety. LREC 2016 - [c123]Albert A. Rizzo, Gale M. Lucas, Jonathan Gratch, Giota Stratou, Louis-Philippe Morency, Kenneth Chavez, Russ Shilling, Stefan Scherer:
Automatic Behavior Analysis During a Clinical Interview with a Virtual Human. MMVR 2016: 316-322 - [c122]Louis-Philippe Morency:
Keynote - Modeling Human Communication Dynamics. SIGDIAL Conference 2016: 263 - [c121]Tadas Baltrusaitis, Peter Robinson, Louis-Philippe Morency:
OpenFace: An open source facial behavior analysis toolkit. WACV 2016: 1-10 - [c120]Behnaz Nojavanasghari, Tadas Baltrusaitis, Charles E. Hughes, Louis-Philippe Morency:
The Future Belongs to the Curious: Towards Automatic Understanding and Recognition of Curiosity in Children. WOCCI 2016: 16-22 - [e10]Yukiko I. Nakano, Elisabeth André, Toyoaki Nishida, Louis-Philippe Morency, Carlos Busso, Catherine Pelachaud:
Proceedings of the 18th ACM International Conference on Multimodal Interaction, ICMI 2016, Tokyo, Japan, November 12-16, 2016. ACM 2016, ISBN 978-1-4503-4556-9 [contents] - [i8]Jason J. Corso, Alexandre Alahi, Kristen Grauman, Gregory D. Hager, Louis-Philippe Morency, Harpreet S. Sawhney, Yaser Sheikh:
Video Analysis for Body-worn Cameras in Law Enforcement. CoRR abs/1604.03130 (2016) - [i7]Amir Zadeh, Rowan Zellers, Eli Pincus, Louis-Philippe Morency:
MOSI: Multimodal Corpus of Sentiment Intensity and Subjectivity Analysis in Online Opinion Videos. CoRR abs/1606.06259 (2016) - [i6]Haohan Wang, Aaksha Meghawat, Louis-Philippe Morency, Eric P. Xing:
Select-Additive Learning: Improving Cross-individual Generalization in Multimodal Sentiment Analysis. CoRR abs/1609.05244 (2016) - [i5]Volkan Cirik, Eduard H. Hovy, Louis-Philippe Morency:
Visualizing and Understanding Curriculum Learning for Long Short-Term Memory Networks. CoRR abs/1611.06204 (2016) - [i4]Amir Zadeh, Tadas Baltrusaitis, Louis-Philippe Morency:
Deep Constrained Local Models for Facial Landmark Detection. CoRR abs/1611.08657 (2016) - [i3]Wenjie Pei, Tadas Baltrusaitis, David M. J. Tax, Louis-Philippe Morency:
Temporal Attention-Gated Model for Robust Sequence Classification. CoRR abs/1612.00385 (2016) - 2015
- [j21]Francis Gaudreault, Louis-Philippe Morency, Rafael J. Najmanovich:
NRGsuite: a PyMOL plugin to perform docking simulations in real time using FlexAID. Bioinform. 31(23): 3856-3858 (2015) - [j20]Giota Stratou, Stefan Scherer, Jonathan Gratch, Louis-Philippe Morency:
Automatic nonverbal behavior indicators of depression and PTSD: the effect of gender. J. Multimodal User Interfaces 9(1): 17-29 (2015) - [j19]Konstantinos Bousmalis, Stefanos Zafeiriou, Louis-Philippe Morency, Maja Pantic, Zoubin Ghahramani:
Variational Infinite Hidden Conditional Random Fields. IEEE Trans. Pattern Anal. Mach. Intell. 37(9): 1917-1929 (2015) - [j18]Friedhelm Schwenker, Stefan Scherer, Louis-Philippe Morency:
Preface of pattern recognition in human computer interaction. Pattern Recognit. Lett. 66: 1-3 (2015) - [j17]Sunghyun Park, Stefan Scherer, Jonathan Gratch, Peter J. Carnevale, Louis-Philippe Morency:
I Can Already Guess Your Answer: Predicting Respondent Reactions during Dyadic Negotiation. IEEE Trans. Affect. Comput. 6(2): 86-96 (2015) - [c119]Louis-Philippe Morency, Giota Stratou, David DeVault, Arno Hartholt, Margot Lhommet, Gale M. Lucas, Fabrizio Morbini, Kallirroi Georgila, Stefan Scherer, Jonathan Gratch, Stacy Marsella, David R. Traum, Albert A. Rizzo:
SimSensei Demonstration: A Perceptive Virtual Human Interviewer for Healthcare Applications. AAAI 2015: 4307-4308 - [c118]Torsten Wörtwein, Louis-Philippe Morency, Stefan Scherer:
Automatic assessment and analysis of public speaking anxiety: A virtual audience case study. ACII 2015: 187-193 - [c117]Sayan Ghosh, Eugene Laksana, Stefan Scherer, Louis-Philippe Morency:
A multi-label convolutional neural network approach to cross-domain action unit detection. ACII 2015: 609-615 - [c116]Giota Stratou, Louis-Philippe Morency, David DeVault, Arno Hartholt, Edward Fast, Margaux Lhommet, Gale M. Lucas, Fabrizio Morbini, Kallirroi Georgila, Stefan Scherer, Jonathan Gratch, Stacy Marsella, David R. Traum, Albert A. Rizzo:
A demonstration of the perception system in SimSensei, a virtual human application for healthcare interviews. ACII 2015: 787-789 - [c115]Maryam Ziaeefard, Robert Bergevin, Louis-Philippe Morency:
Time-slice Prediction of Dyadic Human Activities. BMVC 2015: 167.1-167.13 - [c114]Jonathan Gratch, Susan G. Hill, Louis-Philippe Morency, David V. Pynadath, David R. Traum:
Exploring the Implications of Virtual Human Research for Human-Robot Teams. HCI (11) 2015: 186-196 - [c113]Mathieu Chollet, Torsten Wörtwein, Louis-Philippe Morency, Ari Shapiro, Stefan Scherer:
Exploring feedback strategies to improve public speaking: an interactive virtual audience framework. UbiComp 2015: 1143-1154 - [c112]Han Suk Shim, Sunghyun Park, Moitreya Chatterjee, Stefan Scherer, Kenji Sagae, Louis-Philippe Morency:
Acoustic and para-verbal indicators of persuasiveness in social multimedia. ICASSP 2015: 2239-2243 - [c111]Stefan Scherer, Louis-Philippe Morency, Jonathan Gratch, John Pestian:
Reduced vowel space is a robust indicator of psychological distress: A cross-corpus analysis. ICASSP 2015: 4789-4793 - [c110]Kim Hartmann, Ingo Siegert, Björn W. Schuller, Louis-Philippe Morency, Albert Ali Salah, Ronald Böck:
ERM4CT 2015: Workshop on Emotion Representations and Modelling for Companion Systems. ERM4CT@ICMI 2015: 1-2 - [c109]Moitreya Chatterjee, Sunghyun Park, Louis-Philippe Morency, Stefan Scherer:
Combining Two Perspectives on Classifying Multimodal Data for Recognizing Speaker Traits. ICMI 2015: 7-14 - [c108]Torsten Wörtwein, Mathieu Chollet, Boris Schauerte, Louis-Philippe Morency, Rainer Stiefelhagen, Stefan Scherer:
Multimodal Public Speaking Performance Assessment. ICMI 2015: 43-50 - [c107]Marcelo Worsley, Stefan Scherer, Louis-Philippe Morency, Paulo Blikstein:
Exploring Behavior Representation for Learning Analytics. ICMI 2015: 251-258 - [c106]Chung-Cheng Chiu, Louis-Philippe Morency, Stacy Marsella:
Predicting Co-verbal Gestures: A Deep and Temporal Modeling Approach. IVA 2015: 152-166 - [e9]Kim Hartmann, Ingo Siegert, Björn W. Schuller, Louis-Philippe Morency, Albert Ali Salah, Ronald Böck:
Proceedings of the International Workshop on Emotion Representations and Modelling for Companion Technologies, ERM4CT@ICMI 2015, Seattle, Washington, USA, November 13, 2015. ACM 2015, ISBN 978-1-4503-3988-9 [contents] - [e8]Friedhelm Schwenker, Stefan Scherer, Louis-Philippe Morency:
Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction - Third IAPR TC3 Workshop, MPRSS 2014, Stockholm, Sweden, August 24, 2014, Revised Selected Papers. Lecture Notes in Computer Science 8869, Springer 2015, ISBN 978-3-319-14898-4 [contents] - [i2]Sayan Ghosh, Eugene Laksana, Louis-Philippe Morency, Stefan Scherer:
Learning Representations of Affect from Speech. CoRR abs/1511.04747 (2015) - 2014
- [j16]Gale M. Lucas, Jonathan Gratch, Aisha King, Louis-Philippe Morency:
It's only a computer: Virtual humans increase willingness to disclose. Comput. Hum. Behav. 37: 94-100 (2014) - [j15]Stefan Scherer, Giota Stratou, Gale M. Lucas, Marwa Mahmoud, Jill Boberg, Jonathan Gratch, Albert A. Rizzo, Louis-Philippe Morency:
Automatic audiovisual behavior descriptors for psychological disorder analysis. Image Vis. Comput. 32(10): 648-658 (2014) - [c105]Moitreya Chatterjee, Sunghyun Park, Han Suk Shim, Kenji Sagae, Louis-Philippe Morency:
Verbal Behaviors and Persuasiveness in Online Multimedia Content. SocialNLP@COLING 2014: 50-58 - [c104]Jonathan Gratch, Gale M. Lucas, Aisha King, Louis-Philippe Morency:
It's only a computer: the impact of human-agent interaction in clinical interviews. AAMAS 2014: 85-92 - [c103]David DeVault, Ron Artstein, Grace Benn, Teresa Dey, Edward Fast, Alesia Gainer, Kallirroi Georgila, Jonathan Gratch, Arno Hartholt, Margaux Lhommet, Gale M. Lucas, Stacy Marsella, Fabrizio Morbini, Angela Nazarian, Stefan Scherer, Giota Stratou, Apar Suri, David R. Traum, Rachel Wood, Yuyu Xu, Albert A. Rizzo, Louis-Philippe Morency:
SimSensei kiosk: a virtual human interviewer for healthcare decision support. AAMAS 2014: 1061-1068 - [c102]Mathieu Chollet, Giota Stratou, Ari Shapiro, Louis-Philippe Morency, Stefan Scherer:
An interactive virtual audience platform for public speaking training. AAMAS 2014: 1657-1658 - [c101]Tadas Baltrusaitis, Peter Robinson, Louis-Philippe Morency:
Continuous Conditional Neural Fields for Structured Regression. ECCV (4) 2014: 593-608 - [c100]Moitreya Chatterjee, Giota Stratou, Stefan Scherer, Louis-Philippe Morency:
Context-based signal descriptors of heart-rate variability for anxiety assessment. ICASSP 2014: 3631-3635 - [c99]Sunghyun Park, Han Suk Shim, Moitreya Chatterjee, Kenji Sagae, Louis-Philippe Morency:
Computational Analysis of Persuasiveness in Social Multimedia: A Novel Dataset and Multimodal Prediction Approach. ICMI 2014: 50-57 - [c98]Stefan Scherer, Zakia Hammal, Ying Yang, Louis-Philippe Morency, Jeffrey F. Cohn:
Dyadic Behavior Analysis in Depression Severity Assessment Interviews. ICMI 2014: 112-119 - [c97]Sayan Ghosh, Moitreya Chatterjee, Louis-Philippe Morency:
A Multimodal Context-based Approach for Distress Assessment. ICMI 2014: 240-246 - [c96]Sunghyun Park, Philippa Shoemark, Louis-Philippe Morency:
Toward crowdsourcing micro-level behavior annotations: the challenges of interface, training, and generalization. IUI 2014: 37-46 - [c95]AmirAli Bagher Zadeh, Kenji Sagae, Louis-Philippe Morency:
Towards Learning Nonverbal Identities from the Web: Automatically Identifying Visually Accentuated Words. IVA 2014: 496-503 - [c94]Jonathan Gratch, Ron Artstein, Gale M. Lucas, Giota Stratou, Stefan Scherer, Angela Nazarian, Rachel Wood, Jill Boberg, David DeVault, Stacy Marsella, David R. Traum, Skip R