Learning to Ground Visual Objects for Visual Dialog

Feilong Chen, Xiuyi Chen, Can Xu, Daxin Jiang


Abstract
Visual dialog is challenging because it requires answering a series of coherent questions based on an understanding of the visual environment. Grounding the related visual objects is one of the key problems. Previous studies use the question and history to attend to the image and achieve satisfactory performance, but these methods are not sufficient to locate related visual objects without guidance, and inappropriate grounding of visual objects limits the performance of visual dialog models. In this paper, we propose a novel approach to Learn to Ground visual objects for visual dialog, which employs a novel visual object grounding mechanism in which both prior and posterior distributions over visual objects are used to facilitate grounding. Specifically, a posterior distribution over visual objects is inferred from both the context (history and questions) and the answers, ensuring appropriate grounding of visual objects during training. Meanwhile, a prior distribution, inferred from the context only, is trained to approximate the posterior distribution so that appropriate visual objects can be grounded even without answers during inference. Experimental results on the VisDial v0.9 and v1.0 datasets demonstrate that our approach improves over previous strong models in both generative and discriminative settings by a significant margin.
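
To make the prior/posterior scheme in the abstract concrete, below is a minimal PyTorch sketch of the general idea, assuming soft attention over detected object region features. All module names, dimensions, and the exact placement of the KL objective are illustrative assumptions based only on the abstract, not the paper's actual implementation.

    # Illustrative sketch, not the authors' code: a prior distribution over
    # objects conditioned on context is trained to match a posterior that
    # also sees the answer, so inference can ground objects without answers.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PriorPosteriorGrounding(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.prior_query = nn.Linear(dim, dim)          # context only
            self.posterior_query = nn.Linear(2 * dim, dim)  # context + answer

        def forward(self, objects, context, answer=None):
            # objects: (B, N, dim) region features; context/answer: (B, dim)
            prior_logits = torch.einsum('bd,bnd->bn',
                                        self.prior_query(context), objects)
            prior = F.softmax(prior_logits, dim=-1)   # p(z | context)
            if answer is None:
                # Inference: no answer available, ground with the prior.
                return torch.einsum('bn,bnd->bd', prior, objects), None
            post_in = torch.cat([context, answer], dim=-1)
            post_logits = torch.einsum('bd,bnd->bn',
                                       self.posterior_query(post_in), objects)
            posterior = F.softmax(post_logits, dim=-1)  # q(z | context, answer)
            # Training: attend with the posterior, and add a KL term that
            # pulls the prior toward the posterior (KL(q || p)).
            grounded = torch.einsum('bn,bnd->bd', posterior, objects)
            kl = F.kl_div(prior.log(), posterior, reduction='batchmean')
            return grounded, kl

In training, the returned kl term would be added to the usual dialog loss, so that at test time the answer-free prior alone yields appropriate object grounding.
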
Anthology ID:
2021.findings-emnlp.93
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
1081–1091
URL:
https://aclanthology.org/2021.findings-emnlp.93
DOI:
10.18653/v1/2021.findings-emnlp.93
Cite (ACL):
Feilong Chen, Xiuyi Chen, Can Xu, and Daxin Jiang. 2021. Learning to Ground Visual Objects for Visual Dialog. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1081–1091, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Learning to Ground Visual Objects for Visual Dialog (Chen et al., Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.93.pdf
Data
VisDial