Enhancing safety of vision-language reasoning through model-to-model deliberation

Research output: Contribution to journal › Article › peer-review

Abstract

Traditional vision-language models demonstrate strong performance in tasks such as image captioning and visual question answering, but they remain limited by issues such as hallucination, lack of self-correction, and shallow reasoning. These shortcomings compromise the safety, robustness, and consistency of their reasoning, particularly in ambiguous or high-stakes scenarios. In this paper, we propose three complementary frameworks aimed at enabling more trustworthy visual reasoning through structured deliberation. The first is the self-reflective reasoning single-agent framework, which facilitates iterative self-revision without requiring external supervision. The second is the structured debate agent framework, in which turn-based rebuttals between agents promote contrastive, multi-perspective refinement. The third is the progressive two-stage debate agent framework, which enables efficient yet accurate decision-making through model-to-model deliberation between smaller and larger agents. Experiments on the COCO dataset demonstrate that all three frameworks significantly enhance reasoning performance, achieving up to a 5.4% improvement in Intersection over Union (IoU) and over a 40% reduction in localization error compared to a single-pass baseline. Further evaluation across robustness (IoU), safety (self-revision rate, SRR), and consistency (consistency score, CS) confirms the effectiveness of multi-round, self-corrective, and multi-agent reasoning strategies. These results establish a practical path toward safer, more robust, and more interpretable vision-language models through lightweight, deliberative inference frameworks.
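The progressive two-stage deliberation described in the abstract (a smaller agent drafts an answer, a larger agent reviews it only when needed) might be sketched roughly as follows. This is an illustrative assumption, not the authors' code: the model functions are hypothetical stand-ins, and the abstract does not specify how confidence or escalation is actually handled.

```python
def small_model(question, image):
    # Hypothetical stand-in for a lightweight VLM's first-pass answer.
    return {"answer": "cat", "confidence": 0.55}

def large_model(question, image, draft):
    # Hypothetical stand-in for a larger VLM reviewing the draft.
    return {"answer": "dog", "confidence": 0.9}

def two_stage_deliberate(question, image, threshold=0.7):
    """Stage 1: the small model answers; escalate to stage 2 only
    when the draft's confidence falls below the threshold."""
    draft = small_model(question, image)
    if draft["confidence"] >= threshold:
        return draft["answer"], "stage-1"
    revised = large_model(question, image, draft)
    # Keep whichever answer carries the higher confidence.
    final = revised if revised["confidence"] > draft["confidence"] else draft
    return final["answer"], "stage-2"

answer, stage = two_stage_deliberate("What animal is shown?", image=None)
print(answer, stage)  # prints "dog stage-2": the low-confidence draft escalates
```

The design point the sketch illustrates is efficiency: most queries would terminate at stage 1 with the cheaper model, and the expensive reviewer is invoked only for uncertain cases.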

Original language: English
Article number: 464
Journal: Complex and Intelligent Systems
Volume: 11
Issue number: 11
DOIs
State: Published - Nov 2025

Keywords

  • Debate
  • Object detection
  • Vision language model (VLM)
  • Vision reasoning
  • Visual question answering (VQA)

