Kazuhiro Otsuka
2020 – today
- 2024
- [j15]Koya Ito, Yoko Ishii, Ryo Ishii, Shin'ichiro Eitoku, Kazuhiro Otsuka:
Exploring Multimodal Nonverbal Functional Features for Predicting the Subjective Impressions of Interlocutors. IEEE Access 12: 96769-96782 (2024)
- 2023
- [c67]Ayane Tashiro, Mai Imamura, Shiro Kumano, Kazuhiro Otsuka:
Analyzing and Recognizing Interlocutors' Gaze Functions from Multimodal Nonverbal Cues. ICMI 2023: 33-41
- [c66]Mai Imamura, Ayane Tashiro, Shiro Kumano, Kazuhiro Otsuka:
Analyzing Synergetic Functional Spectrum from Head Movements and Facial Expressions in Conversations. ICMI 2023: 42-50
- [c65]Shumpei Otsuchi, Koya Ito, Yoko Ishii, Ryo Ishii, Shinichirou Eitoku, Kazuhiro Otsuka:
Identifying Interlocutors' Behaviors and its Timings Involved with Impression Formation from Head-Movement Features and Linguistic Features. ICMI 2023: 336-344
- 2021
- [c64]Shumpei Otsuchi, Yoko Ishii, Momoko Nakatani, Kazuhiro Otsuka:
Prediction of Interlocutors' Subjective Impressions Based on Functional Head-Movement Features in Group Meetings. ICMI 2021: 352-360
- [c63]Kazuki Takeda, Kazuhiro Otsuka:
Inflation-Deflation Networks for Recognizing Head-Movement Functions in Face-to-Face Conversations. ICMI 2021: 361-369
- [c62]Takashi Mori, Kazuhiro Otsuka:
Deep Transfer Learning for Recognizing Functional Interactions via Head Movements in Multiparty Conversations. ICMI 2021: 370-378
- 2020
- [j14]Kazuhiro Otsuka, Masahiro Tsumori:
- [j14]Kazuhiro Otsuka, Masahiro Tsumori:
Analyzing Multifunctionality of Head Movements in Face-to-Face Conversations Using Deep Convolutional Neural Networks. IEEE Access 8: 217169-217195 (2020)
2010 – 2019
- 2019
- [j13]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita:
Prediction of Who Will Be Next Speaker and When Using Mouth-Opening Pattern in Multi-Party Conversation. Multimodal Technol. Interact. 3(4): 70 (2019)
- [c61]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita:
Estimating Interpersonal Reactivity Scores Using Gaze Behavior and Dialogue Act During Turn-Changing. HCI (14) 2019: 45-53
- 2018
- [j12]Kazuhiro Otsuka:
Behavioral Analysis of Kinetic Telepresence for Small Symmetric Group-to-Group Meetings. IEEE Trans. Multim. 20(6): 1432-1447 (2018)
- [c60]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita:
Analyzing Gaze Behavior and Dialogue Act during Turn-taking for Estimating Empathy Skill Level. ICMI 2018: 31-39
- [c59]Kazuhiro Otsuka, Keisuke Kasuga, Martina Köhler:
Estimating Visual Focus of Attention in Multiparty Meetings using Deep Convolutional Neural Networks. ICMI 2018: 191-199
- [c58]Daniel Gatica-Perez, Dairazalia Sanchez-Cortes, Trinh Minh Tri Do, Dinesh Babu Jayagopi, Kazuhiro Otsuka:
Vlogging Over Time: Longitudinal Impressions and Behavior in YouTube. MUM 2018: 37-46
- 2017
- [j11]Shiro Kumano, Kazuhiro Otsuka, Ryo Ishii, Junji Yamato:
Collective First-Person Vision for Automatic Gaze Analysis in Multiparty Conversations. IEEE Trans. Multim. 19(1): 107-122 (2017)
- [c57]Shiro Kumano, Ryo Ishii, Kazuhiro Otsuka:
Computational model of idiosyncratic perception of others' emotions. ACII 2017: 42-49
- [c56]Shiro Kumano, Ryo Ishii, Kazuhiro Otsuka:
Comparing empathy perceived by interlocutors in multiparty conversation and external observers. ACII 2017: 50-57
- [c55]Kazuhiro Otsuka:
MMSpace: Multimodal Meeting Space Embodied by Kinetic Telepresence. CHI Extended Abstracts 2017: 458
- [c54]Shogo Okada, Kazuhiro Otsuka:
Recognizing Words from Gestures: Discovering Gesture Descriptors Associated with Spoken Utterances. FG 2017: 430-437
- [c53]Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka:
Prediction of Next-Utterance Timing using Head Movement in Multi-Party Meetings. HAI 2017: 181-187
- [c52]Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka:
Analyzing gaze behavior during turn-taking for estimating empathy skill level. ICMI 2017: 365-373
- 2016
- [j10]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato:
Prediction of Who Will Be the Next Speaker and When Using Gaze Behavior in Multiparty Meetings. ACM Trans. Interact. Intell. Syst. 6(1): 4:1-4:31 (2016)
- [j9]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato:
Using Respiration to Predict Who Will Speak Next and When in Multiparty Meetings. ACM Trans. Interact. Intell. Syst. 6(2): 20:1-20:20 (2016)
- [c51]Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka:
Analyzing mouth-opening transition pattern for predicting next speaker in multi-party meetings. ICMI 2016: 209-216
- [c50]Kazuhiro Otsuka:
MMSpace: Kinetically-augmented telepresence for small group-to-group conversations. VR 2016: 19-28
- 2015
- [j8]Shiro Kumano, Kazuhiro Otsuka, Dan Mikami, Masafumi Matsuda, Junji Yamato:
Analyzing Interpersonal Empathy via Collective Impressions. IEEE Trans. Affect. Comput. 6(4): 324-336 (2015)
- [j7]Dairazalia Sanchez-Cortes, Shiro Kumano, Kazuhiro Otsuka, Daniel Gatica-Perez:
In the Mood for Vlog: Multimodal Inference in Conversational Social Video. ACM Trans. Interact. Intell. Syst. 5(2): 9:1-9:24 (2015)
- [c49]Shiro Kumano, Kazuhiro Otsuka, Ryo Ishii, Junji Yamato:
Automatic gaze analysis in multiparty conversations based on Collective First-Person Vision. FG 2015: 1-8
- [c48]Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka:
Predicting next speaker based on head movement in multi-party meetings. ICASSP 2015: 2319-2323
- [c47]Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka:
Multimodal Fusion using Respiration and Gaze for Predicting Next Speaker in Multi-Party Meetings. ICMI 2015: 99-106
- [c46]Ryo Ishii, Shiro Ozawa, Akira Kojima, Kazuhiro Otsuka, Yuki Hayashi, Yukiko I. Nakano:
Design and Evaluation of Mirror Interface MIOSS to Overlay Remote 3D Spaces. INTERACT (4) 2015: 319-326
- 2014
- [j6]Shiro Kumano, Kazuhiro Otsuka, Masafumi Matsuda, Junji Yamato:
Analyzing Perceived Empathy Based on Reaction Time in Behavioral Mimicry. IEICE Trans. Inf. Syst. 97-D(8): 2008-2020 (2014)
- [c45]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato:
Analysis and modeling of next speaking start timing based on gaze behavior in multi-party meetings. ICASSP 2014: 694-698
- [c44]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato:
Analysis of Timing Structure of Eye Contact in Turn-changing. GazeIn@ICMI 2014: 15-20
- [c43]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato:
Analysis of Respiration for Prediction of "Who Will Be Next Speaker and When?" in Multi-Party Meetings. ICMI 2014: 18-25
- 2013
- [c42]Shiro Kumano, Kazuhiro Otsuka, Masafumi Matsuda, Ryo Ishii, Junji Yamato:
Using a Probabilistic Topic Model to Link Observers' Perception Tendency to Personality. ACII 2013: 588-593
- [c41]Shiro Kumano, Kazuhiro Otsuka, Masafumi Matsuda, Junji Yamato:
Analyzing perceived empathy/antipathy based on reaction time in behavioral coordination. FG 2013: 1-8
- [c40]Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Masafumi Matsuda, Junji Yamato:
Predicting next speaker and timing from gaze transition patterns in multi-party meetings. ICMI 2013: 79-86
- [c39]Kazuhiro Otsuka, Shiro Kumano, Ryo Ishii, Maja Zbogar, Junji Yamato:
MM+Space: n x 4 degree-of-freedom kinetic display for recreating multiparty conversation spaces. ICMI 2013: 389-396
- [c38]Dairazalia Sanchez-Cortes, Joan-Isaac Biel, Shiro Kumano, Junji Yamato, Kazuhiro Otsuka, Daniel Gatica-Perez:
Inferring mood in ubiquitous conversational video. MUM 2013: 22:1-22:9
- 2012
- [j5]Dan Mikami, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato:
Enhancing Memory-Based Particle Filter with Detection-Based Memory Acquisition for Robustness under Severe Occlusion. IEICE Trans. Inf. Syst. 95-D(11): 2693-2703 (2012)
- [j4]Takaaki Hori, Shoko Araki, Takuya Yoshioka, Masakiyo Fujimoto, Shinji Watanabe, Takanobu Oba, Atsunori Ogawa, Kazuhiro Otsuka, Dan Mikami, Keisuke Kinoshita, Tomohiro Nakatani, Atsushi Nakamura, Junji Yamato:
Low-Latency Real-Time Meeting Recognition and Understanding Using Distant Microphones and Omni-Directional Camera. IEEE Trans. Speech Audio Process. 20(2): 499-513 (2012)
- [c37]Shiro Kumano, Kazuhiro Otsuka, Dan Mikami, Masafumi Matsuda, Junji Yamato:
Understanding communicative emotions from collective external observations. CHI Extended Abstracts 2012: 2201-2206
- [c36]Kazuhiro Otsuka, Shiro Kumano, Dan Mikami, Masafumi Matsuda, Junji Yamato:
Reconstructing multiparty conversation field by augmenting human head motions via dynamic displays. CHI Extended Abstracts 2012: 2243-2248
- [c35]Dinesh Babu Jayagopi, Dairazalia Sanchez-Cortes, Kazuhiro Otsuka, Junji Yamato, Daniel Gatica-Perez:
Linking speaking and looking behavior patterns with group composition, perception, and performance. ICMI 2012: 433-440
- [c34]Dan Mikami, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato:
Enhancing Memory-based Particle Filter with Detection-based Memory Acquisition for Robustness under Severe Occlusion. VISAPP (2) 2012: 208-215
- 2011
- [j3]Kazuhiro Otsuka:
Conversation Scene Analysis [Social Sciences]. IEEE Signal Process. Mag. 28(4): 127-131 (2011)
- [c33]Shiro Kumano, Kazuhiro Otsuka, Dan Mikami, Junji Yamato:
Analyzing empathetic interactions based on the probabilistic modeling of the co-occurrence patterns of facial expressions in group meetings. FG 2011: 43-50
- [c32]Kazuhiro Otsuka:
Multimodal Conversation Scene Analysis for Understanding People's Communicative Behaviors in Face-to-Face Meetings. HCI (12) 2011: 171-179
- [c31]Kazuhiro Otsuka, Kamil Sebastian Mucha, Shiro Kumano, Dan Mikami, Masafumi Matsuda, Junji Yamato:
A system for reconstructing multiparty conversation field based on augmented head motion by dynamic projection. ACM Multimedia 2011: 763-764
- [c30]Lumei Su, Shiro Kumano, Kazuhiro Otsuka, Dan Mikami, Junji Yamato, Yoichi Sato:
Early facial expression recognition with high-frame rate 3D sensing. SMC 2011: 3304-3310
- 2010
- [c29]Dan Mikami, Kazuhiro Otsuka, Junji Yamato:
Memory-Based Particle Filter for Tracking Objects with Large Variation in Pose and Appearance. ECCV (3) 2010: 215-228
- [c28]Sebastian Gorga, Kazuhiro Otsuka:
Conversation scene analysis based on dynamic Bayesian network and image-based gaze detection. ICMI-MLMI 2010: 54:1-54:8
- [c27]Takaaki Hori, Shoko Araki, Takuya Yoshioka, Masakiyo Fujimoto, Shinji Watanabe, Takanobu Oba, Atsunori Ogawa, Kazuhiro Otsuka, Dan Mikami, Keisuke Kinoshita, Tomohiro Nakatani, Atsushi Nakamura, Junji Yamato:
Real-time meeting recognition and understanding using distant microphones and omni-directional camera. SLT 2010: 424-429
2000 – 2009
- 2009
- [j2]Shiro Kumano, Kazuhiro Otsuka, Junji Yamato, Eisaku Maeda, Yoichi Sato:
Pose-Invariant Facial Expression Recognition Using Variable-Intensity Templates. Int. J. Comput. Vis. 83(2): 178-194 (2009)
- [j1]Oscar Mateo Lozano, Kazuhiro Otsuka:
Real-time Visual Tracker by Stream Processing. J. Signal Process. Syst. 57(2): 285-295 (2009)
- [c26]Dan Mikami, Kazuhiro Otsuka, Junji Yamato:
Memory-based Particle Filter for face pose tracking robust under complex dynamics. CVPR 2009: 999-1006
- [c25]Kentaro Ishizuka, Shoko Araki, Kazuhiro Otsuka, Tomohiro Nakatani, Masakiyo Fujimoto:
A speaker diarization method based on the probabilistic fusion of audio-visual location information. ICMI 2009: 55-62
- [c24]Shiro Kumano, Kazuhiro Otsuka, Dan Mikami, Junji Yamato:
Recognizing communicative facial expressions for discovering interpersonal emotions in group meetings. ICMI 2009: 99-106
- [c23]Kazuhiro Otsuka, Shoko Araki, Dan Mikami, Kentaro Ishizuka, Masakiyo Fujimoto, Junji Yamato:
Realtime meeting analysis and 3D meeting viewer based on omnidirectional multimodal sensors. ICMI 2009: 219-220
- 2008
- [c22]Shiro Kumano, Kazuhiro Otsuka, Junji Yamato, Eisaku Maeda, Yoichi Sato:
Combining Stochastic and Deterministic Search for Pose-Invariant Facial Expression Recognition. BMVC 2008: 1-10
- [c21]Oscar Mateo Lozano, Kazuhiro Otsuka:
Simultaneous and fast 3D tracking of multiple faces in video by GPU-based stream processing. ICASSP 2008: 713-716
- [c20]Kazuhiro Otsuka, Shoko Araki, Kentaro Ishizuka, Masakiyo Fujimoto, Martin Heinrich, Junji Yamato:
A realtime multimodal system for analyzing group meetings by combining face pose tracking and speaker diarization. ICMI 2008: 257-264
- [c19]Kazuhiro Otsuka, Junji Yamato:
Fast and Robust Face Tracking for Analyzing Multiparty Face-to-Face Meetings. MLMI 2008: 14-25
- 2007
- [c18]Shiro Kumano, Kazuhiro Otsuka, Junji Yamato, Eisaku Maeda, Yoichi Sato:
Pose-Invariant Facial Expression Recognition Using Variable-Intensity Templates. ACCV (1) 2007: 324-334
- [c17]Kazuhiro Otsuka, Hiroshi Sawada, Junji Yamato:
Automatic inference of cross-modal nonverbal interactions in multiparty conversations: "who responds to whom, when, and how?" from gaze, head gestures, and utterances. ICMI 2007: 255-262
- 2006
- [c16]Kazuhiro Otsuka, Junji Yamato, Yoshinao Takemae, Hiroshi Murase:
Quantifying interpersonal influence in face-to-face conversations based on visual attention patterns. CHI Extended Abstracts 2006: 1175-1180
- [c15]Kazuhiro Otsuka, Junji Yamato, Yoshinao Takemae, Hiroshi Murase:
Conversation Scene Analysis with Dynamic Bayesian Network Based on Visual Head Tracking. ICME 2006: 949-952
- 2005
- [c14]Yoshinao Takemae, Kazuhiro Otsuka, Junji Yamato:
Development of automatic video editing system based on stereo-based head tracking for multiparty conversations. AMT 2005: 269
- [c13]Yoshinao Takemae, Kazuhiro Otsuka, Junji Yamato:
Automatic video editing system using stereo-based head tracking for multiparty conversation. CHI Extended Abstracts 2005: 1817-1820
- [c12]Yoshinao Takemae, Kazuhiro Otsuka, Junji Yamato:
Effects of Automatic Video Editing System Using Stereo-Based Head Tracking for Archiving Meetings. ICME 2005: 185-188
- [c11]Kazuhiro Otsuka, Yoshinao Takemae, Junji Yamato:
A probabilistic inference of multiparty-conversation structure based on Markov-switching models of gaze patterns, head directions, and utterances. ICMI 2005: 191-198
- [c10]Kazuhiro Otsuka, Yoshinao Takemae, Junji Yamato, Hiroshi Murase:
Probabilistic Inference of Gaze Patterns and Structure of Multiparty Conversations from Head Directions and Utterances. JSAI Workshops 2005: 353-364
- 2004
- [c9]Yoshinao Takemae, Kazuhiro Otsuka, Naoki Mukawa:
Impact of video editing based on participants' gaze in multiparty conversation. CHI Extended Abstracts 2004: 1333-1336
- [c8]Kazuhiro Otsuka, Naoki Mukawa:
Multiview Occlusion Analysis for Tracking Densely Populated Objects Based on 2-D Visual Angles. CVPR (1) 2004: 90-97
- [c7]Kazuhiro Otsuka, Naoki Mukawa:
A Particle Filter for Tracking Densely Populated Objects Based on Explicit Multiview Occlusion Analysis. ICPR (4) 2004: 745-750
- 2003
- [c6]Yoshinao Takemae, Kazuhiro Otsuka, Naoki Mukawa:
Video cut editing rule based on participants' gaze in multiparty conversation. ACM Multimedia 2003: 303-306
- 2000
- [c5]Kazuhiro Otsuka, Tsutomu Horikoshi, Satoshi Suzuki, Haruhiko Kojima:
Memory-Based Forecasting for Weather Image Patterns. AAAI/IAAI 2000: 330-336
1990 – 1999
- 1999
- [c4]Kazuhiro Otsuka, Tsutomu Horikoshi, Satoshi Suzuki, Haruhiko Kojima:
Memory-Based Forecasting of Complex Natural Patterns by Retrieving Similar Image Sequences. ICIAP 1999: 874-
- 1998
- [c3]Kazuhiro Otsuka, Tsutomu Horikoshi, Satoshi Suzuki, Masaharu Fujii:
Feature extraction of temporal texture based on spatiotemporal motion trajectory. ICPR 1998: 1047-1051
- [c2]Kazuhiro Otsuka, Tsutomu Horikoshi, Satoshi Suzuki:
Image Sequence Retrieval for Forecasting Weather Radar Echo Pattern. MVA 1998: 238-241
- 1997
- [c1]Kazuhiro Otsuka, Tsutomu Horikoshi, Satoshi Suzuki:
Image velocity estimation from trajectory surface in spatiotemporal space. CVPR 1997: 200-205
last updated on 2024-10-07 21:21 CEST by the dblp team
all metadata released as open data under CC0 1.0 license