VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation

1University of Macau, 2CSIRO Data61

Abstract

Given the higher information load processed by large vision-language models (LVLMs) compared to single-modal LLMs, detecting LVLM hallucinations requires more human labor and time, and thus raises broader safety concerns. In this paper, we introduce VL-Uncertainty, the first uncertainty-based framework for detecting hallucinations in LVLMs. Unlike most existing methods that require ground-truth or pseudo annotations, VL-Uncertainty utilizes uncertainty as an intrinsic metric. We measure uncertainty by analyzing the prediction variance across semantically equivalent but perturbed prompts, spanning both visual and textual inputs. When LVLMs are highly confident, they provide consistent responses to semantically equivalent queries. When uncertain, however, the responses of the target LVLM become more random. To account for semantically similar answers with different wordings, we cluster LVLM responses by their semantic content and then compute the entropy of the cluster distribution as the uncertainty measure for hallucination detection. Extensive experiments on 10 LVLMs across four benchmarks, covering both free-form and multi-choice tasks, show that VL-Uncertainty significantly outperforms strong baseline methods in hallucination detection.
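To make the uncertainty measure concrete, below is a minimal sketch of the cluster-distribution entropy described above. It assumes a hypothetical same_meaning callable (e.g., an NLI-style equivalence check) that decides whether two answers share the same semantic content; the helper names and the greedy clustering are illustrative, not taken from the released code.

import math
from typing import Callable, List

def cluster_by_semantics(responses: List[str],
                         same_meaning: Callable[[str, str], bool]) -> List[List[str]]:
    # Greedily group answers that convey the same semantic content.
    clusters: List[List[str]] = []
    for r in responses:
        for c in clusters:
            if same_meaning(r, c[0]):
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

def cluster_entropy(responses: List[str],
                    same_meaning: Callable[[str, str], bool]) -> float:
    # Entropy of the answer-cluster distribution: higher means more uncertain.
    clusters = cluster_by_semantics(responses, same_meaning)
    n = len(responses)
    probs = [len(c) / n for c in clusters]
    return -sum(p * math.log(p) for p in probs)

A higher entropy indicates a more scattered answer set and hence a higher likelihood of hallucination.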

Motivation

Motivation. External evaluator-based methods usually suffer from missing knowledge when applied to new domains (see (a)). In contrast, our VL-Uncertainty elicits the intrinsic uncertainty of the LVLM through the proposed semantic-equivalent perturbation. The refined uncertainty estimate then enables reliable LVLM hallucination detection (see (b)).

Method

Overall illustration of our proposed VL-Uncertainty. To mine uncertainty arising from different modalities, we apply semantic-equivalent perturbations (left) to both visual and textual prompts. For the visual prompt, the original image is blurred to varying degrees, mimicking human visual perception. For the textual prompt, a pre-trained LLM is prompted to rephrase the original question in a semantically equivalent manner at different temperatures; a detailed instruction is designed to ensure the rephrasing preserves the original semantics. Prompt pairs with varying degrees of perturbation are then used to effectively elicit LVLM uncertainty. We cluster the LVLM answer set by semantic meaning and use the entropy of the answer-cluster distribution as the LVLM uncertainty (right). The estimated uncertainty serves as a continuous indicator of different levels of LVLM hallucination. A sketch of this perturbation loop is given below.
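The perturbation loop can be sketched as follows, assuming hypothetical llm_rephrase and lvlm_answer wrappers around a pre-trained LLM and the target LVLM; the blur radii and sampling temperatures are illustrative values, not the paper's exact settings.

from typing import Callable, List, Tuple
from PIL import Image, ImageFilter

def perturbed_prompt_pairs(image: Image.Image,
                           question: str,
                           llm_rephrase: Callable[[str, float], str],
                           blur_radii=(0.0, 1.0, 2.0, 3.0),
                           temperatures=(0.2, 0.5, 0.8, 1.1)) -> List[Tuple[Image.Image, str]]:
    # Build visual/textual prompt pairs with increasing perturbation strength:
    # the image is Gaussian-blurred with a growing radius (visual side), and
    # the question is rephrased by an LLM at a growing temperature while its
    # semantics are preserved (textual side).
    pairs = []
    for radius, temperature in zip(blur_radii, temperatures):
        blurred = image.filter(ImageFilter.GaussianBlur(radius=radius))
        rephrased = llm_rephrase(question, temperature)
        pairs.append((blurred, rephrased))
    return pairs

def collect_answers(pairs: List[Tuple[Image.Image, str]],
                    lvlm_answer: Callable[[Image.Image, str], str]) -> List[str]:
    # Query the target LVLM once per perturbed visual-textual prompt pair.
    return [lvlm_answer(img, q) for img, q in pairs]

The collected answers can then be clustered by semantic meaning and scored with the cluster-distribution entropy sketched after the Abstract; thresholding that entropy flags likely hallucinations.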

Comparison with State-of-the-arts

Comparison with the state of the art on both free-form benchmarks (MM-Vet and LLaVABench) and multi-choice benchmarks (MMMU and ScienceQA) for LVLM hallucination detection. Our VL-Uncertainty yields significant improvements over strong baselines, validating the efficacy of the proposed semantic-equivalent perturbation in eliciting and estimating LVLM uncertainty more accurately, which in turn facilitates LVLM hallucination detection. The reported results are hallucination detection accuracy. We re-implement semantic entropy in the vision-language context.

Qualitative Analysis

Qualitative comparison between VL-Uncertainty and the baselines on a sample from a free-form benchmark. For this hallucinatory sample, the pseudo-annotation-based method fails to capture the underlying logic and thus misses the hallucination (see (a)). For semantic entropy, vanilla multi-sampling proves ineffective at mining LVLM uncertainty (see (b)). In contrast, our proposed semantic-equivalent perturbation of both visual and textual prompts successfully elicits LVLM uncertainty, and the refined uncertainty estimate enables successful detection of the LVLM hallucination (see (c)).

BibTeX


      @article{zhang2024vl,
        title={VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation},
        author={Zhang, Ruiyang and Zhang, Hu and Zheng, Zhedong},
        journal={arXiv preprint arXiv:2411.11919},
        year={2024}
      }