Visual Hallucinations of Multi-modal Large Language Models

Wen Huang, Hongbin Liu, Minxin Guo, Neil Gong


Abstract
Visual hallucination (VH) means that a multi-modal LLM (MLLM) imagines incorrect details about an image in visual question answering. Existing studies find VH instances only in existing image datasets, which results in a biased understanding of MLLMs’ performance under VH due to the limited diversity of such VH instances. In this work, we propose a tool called VHTest to generate a diverse set of VH instances. Specifically, VHTest finds some initial VH instances in existing image datasets (e.g., COCO), generates a text description for each VH mode, and uses a text-to-image generative model (e.g., DALL-E-3) to generate VH images based on the text descriptions. We collect a benchmark dataset with 1,200 VH instances in 8 VH modes using VHTest. We find that existing MLLMs such as GPT-4, LLaVA-1.5, and MiniGPT-v2 hallucinate for a large fraction of the instances in our benchmark. Moreover, we find that fine-tuning an MLLM on our benchmark dataset reduces its likelihood of hallucinating without sacrificing its performance on other benchmarks. Our benchmarks are publicly available: https://github.com/wenhuang2000/VHTest.
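The abstract describes a three-stage pipeline: mine initial VH instances from an existing dataset, write a text description for each VH mode, and feed those descriptions to a text-to-image model. A minimal sketch of that flow is below; it is not the authors' implementation, and every function name is hypothetical, with stubs standing in for the real dataset mining and the DALL-E-3 call:

```python
# Hypothetical sketch of a VHTest-style pipeline, assuming the three stages
# named in the abstract. Stage logic is stubbed; a real implementation would
# mine COCO for seed instances and call a text-to-image model (e.g., DALL-E-3).

def find_initial_vh_instances(dataset):
    """Stage 1 (stub): keep seed images that trigger a hallucination."""
    return [img for img in dataset if img.get("triggers_vh")]

def describe_vh_mode(instance):
    """Stage 2 (stub): turn a seed instance into a text description."""
    return f"An image designed to trigger this VH mode: {instance['caption']}"

def generate_vh_image(description):
    """Stage 3 (stub): stand-in for the text-to-image generation call."""
    return {"prompt": description, "image": "<generated image placeholder>"}

def vhtest_pipeline(dataset):
    """Chain the three stages into a benchmark of generated VH instances."""
    benchmark = []
    for inst in find_initial_vh_instances(dataset):
        description = describe_vh_mode(inst)
        benchmark.append(generate_vh_image(description))
    return benchmark

# Toy input: only the first image is a VH seed, so only it is expanded.
dataset = [
    {"caption": "three apples on a table", "triggers_vh": True},
    {"caption": "a red bus", "triggers_vh": False},
]
print(len(vhtest_pipeline(dataset)))  # prints 1
```

The point of the structure is that each generated image is traceable to the seed instance and VH mode it came from, which is what lets the resulting set serve as a benchmark.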
Anthology ID:
2024.findings-acl.573
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9614–9631
URL:
https://aclanthology.org/2024.findings-acl.573
DOI:
10.18653/v1/2024.findings-acl.573
Cite (ACL):
Wen Huang, Hongbin Liu, Minxin Guo, and Neil Gong. 2024. Visual Hallucinations of Multi-modal Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2024, pages 9614–9631, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Visual Hallucinations of Multi-modal Large Language Models (Huang et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.573.pdf