---
license: mit
pipeline_tag: image-text-to-text
library_name: transformers
---

# V-Thinker: Interactive Thinking with Images


This repository hosts the V-Thinker model, a general-purpose multimodal reasoning assistant that enables Interactive Thinking with Images, as presented in the paper [V-Thinker: Interactive Thinking with Images](https://arxiv.org/abs/2511.04460).

V-Thinker focuses on integrating image interaction with long-horizon reasoning through an end-to-end reinforcement learning framework, comprising a Data Evolution Flywheel and a Visual Progressive Training Curriculum.

Associated Hugging Face Resources:

## Abstract

Empowering Large Multimodal Models (LMMs) to deeply integrate image interaction with long-horizon reasoning capabilities remains a long-standing challenge in this field. Recent advances in vision-centric reasoning explore a promising “Thinking with Images” paradigm for LMMs, profoundly shifting from image-assisted reasoning to image-interactive thinking. While this milestone enables models to focus on fine-grained image regions, progress remains constrained by narrow visual tool spaces and task-specific workflow designs. To bridge this gap, we present V-Thinker, a general-purpose multimodal reasoning assistant that enables interactive, vision-centric thinking through end-to-end reinforcement learning. V-Thinker comprises two key components: (1) a Data Evolution Flywheel that automatically synthesizes, evolves, and verifies interactive reasoning datasets across three dimensions — diversity, quality, and difficulty; and (2) a Visual Progressive Training Curriculum that first aligns perception via point-level supervision, then integrates interactive reasoning through a two-stage reinforcement learning framework. Furthermore, we introduce VTBench, an expert-verified benchmark targeting vision-centric interactive reasoning tasks. Extensive experiments demonstrate that V-Thinker consistently outperforms strong LMM-based baselines in both general and interactive reasoning scenarios, providing valuable insights for advancing image-interactive reasoning applications.

## Quick Start

The authors provide a simple script to run inference on custom cases:

```bash
cd ./eval/vtbench_IR
python inference.py
```

For more details on installation, training, and further inference examples, please refer to the official GitHub repository.
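
As a rough illustration only, the sketch below shows how a checkpoint with this card's `transformers` / `image-text-to-text` metadata could be loaded and queried. The repository id, chat template, and generation settings are placeholders and assumptions, not the official recipe; please follow the GitHub repository for the authors' actual usage example.

```python
# Minimal sketch, assuming the checkpoint exposes a standard transformers
# image-text-to-text interface. The model id below is a placeholder.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "path/to/V-Thinker"  # hypothetical; replace with the actual checkpoint

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

image = Image.open("example.png")
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Solve the problem shown in the image step by step."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
print(processor.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```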

## Citation

If you find V-Thinker useful for your research or applications, please cite the paper:

```bibtex
@misc{qiao2025vthinker,
      title={V-Thinker: Interactive Thinking with Images},
      author={Runqi Qiao and Qiuna Tan and Minghan Yang and Guanting Dong and Peiqing Yang and Shiqiang Lang and Enhui Wan and Xiaowan Wang and Yida Xu and Lan Yang and Chong Sun and Chen Li and Honggang Zhang},
      year={2025},
      eprint={2511.04460},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2511.04460},
}
```

## License

This project is released under the MIT License.