AVSP 2003: St. Jorioz, France
- Jean-Luc Schwartz, Frédéric Berthommier, Marie-Agnès Cathiard, David Sodoyer:
AVSP 2003 - International Conference on Audio-Visual Speech Processing, St. Jorioz, France, September 4-7, 2003. ISCA 2003
Psycho - Neurophysiology
Invited Paper
- Leonardo Fogassi:
Evolution of language from action understanding. 1-2
Early Interactions
- Riadh Lebib, David Papo, Abdel Douiri, Stella de Bode, Pierre-Marie Baudonniere:
Early processing of visual speech information modulates the subsequent processing of auditory speech input at a pre-attentive level: Evidence from event-related brain potential data. 3-8
- Jeesun Kim, Chris Davis:
Testing the cuing hypothesis for the AV speech detection advantage. 9-12
- Lynne E. Bernstein, Sumiko Takayanagi, Edward T. Auer Jr.:
Enhanced auditory detection with av speech: perceptual evidence for speech and non-speech mechanisms. 13-17
- Jean-Luc Schwartz, Frédéric Berthommier, Christophe Savariaux:
Auditory syllabic identification enhanced by non-informative visible speech. 19-24
AV Synchrony
- Brianna L. Conrey, David B. Pisoni:
Audiovisual asynchrony detection for speech and nonspeech signals. 25-30
- Ken W. Grant, Virginie van Wassenhove, David Poeppel:
Discrimination of auditory-visual synchrony. 31-35
- Virginie van Wassenhove, Ken W. Grant, David Poeppel:
Electrophysiology of auditory-visual speech integration. 37-42
Inter-Individual Variability
- Kaoru Sekiyama, Denis Burnham, Helen Tam, V. Dogu Erdener:
Auditory-visual speech perception development in Japanese and English speakers. 43-47
- Tara Mohammed, Mairéad MacSweeney, Ruth Campbell:
Developing the TAS: Individual differences in silent speechreading, reading and phonological awareness in deaf and hearing speechreaders. 49-54
- Tonya R. Bergeson, David B. Pisoni, Jeffrey T. Reynolds:
Perception of point light displays of speech by normal-hearing adults and deaf adults with cochlear implants. 55-60
Mechanisms
- Marie-Agnès Cathiard, Christian Abry, Séverine Gedzelman, Hélène Loevenbruck:
Visual and auditory perception of epenthetic glides. 61-66
- Jean Vroomen, Mirjam Keetels, Sabine van Linden, Béatrice de Gelder, Paul Bertelson:
Selective adaptation and recalibration of auditory speech by lipread information: Dissipation. 67-70
- Ville Ojanen, Jyrki Tuomainen, Mikko Sams:
Effect of audiovisual primes on identification of auditory target syllables. 71-75
Models
- Jean-Luc Schwartz:
Why the FLMP should not be applied to McGurk data ...or how to better compare models in the Bayesian framework. 77-82
- Dominic W. Massaro:
Model Selection in AVSP: Some old and not so old news. 83-88
- Frédéric Berthommier:
A phonetically neutral model of the low-level audiovisual interaction. 89-94
Analysis and Recognition
Invited Paper
- Gerasimos Potamianos, Chalapathy Neti, Sabine Deligne:
Joint audio-visual speech processing for recognition and enhancement. 95-104
Face Analysis
- Matthias Odisio, Gérard Bailly:
Shape and appearance models of talking faces for model-based tracking. 105-110
- Jesus F. Guitarte Perez, Klaus Lukas, Alejandro F. Frangi:
Low resource lip finding and tracking algorithm for embedded devices. 111-116
- Tomoaki Yoshinaga, Satoshi Tamura, Koji Iwano, Sadaoki Furui:
Audio-visual speech recognition using lip movement extracted from side-face images. 117-120
- Islam Shdaifat, Rolf-Rainer Grigat, Detlev Langmann:
A System for Automatic Lip Reading. 121-126
- Patricia Scanlon, Richard B. Reilly, Philip de Chazal:
Visual feature analysis for automatic speechreading. 127-132
AV Relationships
- Roland Goecke, J. Bruce Millar:
Statistical analysis of the relationship between audio and video speech parameters for Australian English. 133-138
- Laurent Girin:
Pure audio McGurk effect. 139-144
- David Sodoyer, Laurent Girin, Christian Jutten, Jean-Luc Schwartz:
Further experiments on audio-visual speech source separation. 145-150
Recognition and Dialog
- Rui Ping Shi, Johann Adelhardt, Viktor Zeißler, Anton Batliner, Carmen Frank, Elmar Nöth, Heinrich Niemann:
Using speech and gesture to explore user states in multimodal dialogue systems. 151-156
- Kazuhiro Nakadai, Daisuke Matsuura, Hiroshi G. Okuno, Hiroshi Tsujino:
Improvement of three simultaneous speech recognition by using AV integration and scattering theory for humanoid. 157-162
- Martin Heckmann, Frédéric Berthommier, Christophe Savariaux, Kristian Kroschel:
Effects of image distortions on audio-visual speech recognition. 163-168
- Milos Zelezný, Petr Císar:
Czech audio-visual speech corpus of a car driver for in-vehicle audio-visual speech recognition. 169-173
- Jing Huang, Gerasimos Potamianos, Chalapathy Neti:
Improving audio-visual speech recognition with an infrared headset. 175-178
Talking Faces, Gestures, and Expressions
Invited Paper
- Jacqueline Leybaert:
The role of Cued Speech in language processing by deaf children: An overview. 179-186
Talking Faces and Evaluation
- Barry-John Theobald, J. Andrew Bangham, Iain A. Matthews, Gavin C. Cawley:
Evaluation of a talking head based on appearance models. 187-192
- Guillaume Vignali, Harold Hill, Eric Vatikiotis-Bateson:
Linking the structure and perception of 3D faces: Gender, ethnicity, and expressive posture. 193-198
- Michael Frydrych, Jari Kätsyri, Martin Dobsík, Mikko Sams:
Toolkit for animation of Finnish talking head. 199-204
- Catherine Siciliano, Andrew Faulkner, Geoff Williams:
Lipreadability of a synthetic talking face in normal hearing and hearing-impaired listeners. 205-208
Production and Coarticulation
- Emanuela Magno Caldognetto, Piero Cosi, Carlo Drioli, Graziano Tisato, Federica Cavicchio:
Coproduction of speech and emotions: visual and acoustic modifications of some phonetic labial targets. 209-214
- Sascha Fagel, Caroline Clemens:
Two articulation models for audiovisual speech synthesis - description and determination. 215-220
- Elisabetta Bevacqua, Catherine Pelachaud:
Triphone-based coarticulation model. 221-226
- Virginie Attina, Denis Beautemps, Marie-Agnès Cathiard, Matthias Odisio:
Toward an audiovisual synthesizer for Cued Speech: Rules for CV French syllables. 227-232
Gestures and Expressions
- Magnus Nordstrand, Gunilla Svanfeldt, Björn Granström, David House:
Measurements of articulatory variation and communicative signals in expressive speech. 233-238
- Jari Kätsyri, Vasily Klucharev, Michael Frydrych, Mikko Sams:
Identification of synthetic and natural emotional facial expressions. 239-243
- Marion Dohen, Hélène Loevenbruck, Marie-Agnès Cathiard, Jean-Luc Schwartz:
Audiovisual perception of contrastive focus in French. 245-250
- Loredana Cerrato, Mustapha Skhiri:
A method for the analysis and measurement of communicative head movements in human dialogues. 251-256
Complementary Material
- Douglas M. Shiller, Christian Kroos, Eric Vatikiotis-Bateson, Kevin G. Munhall:
Exploring the spatial frequency requirements of audio-visual speech using superimposed facial motion. 257