---
language:
  - en
license: apache-2.0
task_categories:
  - question-answering
  - image-text-to-text
configs:
  - config_name: benchmark
    data_files:
      - split: VidoSeek
        path: vidoseek.json
      - split: SlideVQA_Refined
        path: slidevqa_refined.json
---

## 🚀Overview

This is the repository for ViDoSeek, a benchmark specifically designed for visually rich document retrieval-reason-answer, well suited for evaluating RAG over a large document corpus.

ViDoSeek sets itself apart with its heightened difficulty, which stems from its multi-document context and the intricate nature of its content types, particularly the Layout category. The dataset contains both single-hop and multi-hop queries, presenting a diverse set of challenges. We have also released SlideVQA-Refined, a version of SlideVQA refined through our pipeline; it is likewise suitable for evaluating retrieval-augmented generation.


## 🔍Dataset Format

The annotations are provided as a JSON file. Each entry has the following structure:

```json
{
    "uid": "04d8bb0db929110f204723c56e5386c1d8d21587_2",
    // Unique identifier that distinguishes different queries
    "query": "What is the temperature of Steam explosion of Pretreatment for Switchgrass and Sugarcane bagasse preparation?",
    // Query content
    "reference_answer": "195-205 Centigrade",
    // Reference answer to the query
    "meta_info": {
        "file_name": "Pretreatment_of_Switchgrass.pdf",
        // Original file name, typically a PDF file
        "reference_page": [10, 11],
        // Reference page numbers, represented as an array
        "source_type": "Text",
        // Type of data source: 2d_layout / Text / Table / Chart
        "query_type": "Multi-Hop"
        // Query type: Multi-Hop or Single-Hop
    }
}
```
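
For quick inspection, the snippet below loads the annotation file and tallies queries by source type and query type. It is a minimal sketch that assumes `vidoseek.json` has already been downloaded to the working directory; only the per-entry schema above is documented, so the top-level container (a plain list vs. a wrapper object) is handled defensively.

```python
import json
from collections import Counter

# Load the ViDoSeek annotations (assumes vidoseek.json is in the working directory).
with open("vidoseek.json", "r", encoding="utf-8") as f:
    data = json.load(f)

# The per-entry schema is documented above; the top-level container may be a plain
# list of entries or a wrapper object, so handle both cases.
entries = data if isinstance(data, list) else next(
    v for v in data.values() if isinstance(v, list)
)

print(f"{len(entries)} queries")
print("by source type:", Counter(e["meta_info"]["source_type"] for e in entries))
print("by query type: ", Counter(e["meta_info"]["query_type"] for e in entries))
```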

## 📚 Download and Pre-Process

To use ViDoSeek, you need to download the document files `vidoseek_pdf_document.zip` and the query annotations `vidoseek.json`.
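
If you prefer to fetch everything programmatically, the sketch below uses `huggingface_hub` to download the dataset files and then unpacks the PDF archive. The repository id `autumncc/ViDoSeek` is an assumption based on this page; adjust it if the dataset lives under a different namespace.

```python
import os
import zipfile

from huggingface_hub import snapshot_download

# Download the annotation JSON files and the zipped PDF documents from the Hub.
# NOTE: the repo_id below is an assumption; change it if the dataset is hosted elsewhere.
local_dir = snapshot_download(
    repo_id="autumncc/ViDoSeek",
    repo_type="dataset",
    local_dir="./ViDoSeek",
)

# Unpack the PDF documents next to the annotations.
with zipfile.ZipFile(os.path.join(local_dir, "vidoseek_pdf_document.zip")) as zf:
    zf.extractall(os.path.join(local_dir, "pdf"))

print("Files available under:", local_dir)
```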

Optionally, you can use the code we provide to process the dataset and perform inference. The processing code is available at https://github.com/Alibaba-NLP/ViDoRAG/tree/main/scripts.

## 📝 Citation

If you find this dataset useful, please consider citing our paper:

```bibtex
@article{wang2025vidorag,
  title={ViDoRAG: Visual Document Retrieval-Augmented Generation via Dynamic Iterative Reasoning Agents},
  author={Wang, Qiuchen and Ding, Ruixue and Chen, Zehui and Wu, Weiqi and Wang, Shihang and Xie, Pengjun and Zhao, Feng},
  journal={arXiv preprint arXiv:2502.18017},
  year={2025}
}
```