ALIGN-BENCH is a benchmark for quantitatively measuring the cross-modal alignment of vision-language models.

The code can be found at https://github.com/IIGROUP/SCL.

The core idea is to take the cross-attention maps from the last layer of the fusion encoder and compare them with the annotated image regions corresponding to specific words in the caption.
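As a rough illustration of this comparison, the sketch below measures how much of a word's (normalized) cross-attention mass falls inside its annotated region. The function name `attention_region_score` and the specific overlap measure are assumptions for illustration, not the exact metric implemented by ALIGN-BENCH.

```python
import numpy as np

def attention_region_score(attn_map: np.ndarray, region_mask: np.ndarray) -> float:
    """Fraction of a word's cross-attention mass that lies inside its annotated region.

    attn_map:    (H, W) cross-attention map for one word, taken from the last
                 fusion-encoder layer and upsampled to image resolution.
    region_mask: (H, W) binary mask (1 inside the annotated region, 0 outside).
    """
    attn = attn_map / (attn_map.sum() + 1e-8)  # normalize to a spatial distribution
    return float((attn * region_mask).sum())   # attention mass falling in the region
```

How such per-word scores are aggregated into the benchmark's reported numbers is defined in the repository linked above.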

ALIGN-BENCH computes global-local and local-local alignment scores from two kinds of annotation: bounding boxes and pixel masks.
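To score against bounding-box annotations with the same routine used for pixel masks, a box can first be rasterized into a binary mask. The helper below is a minimal sketch of that step; the name `bbox_to_mask` and the (x_min, y_min, x_max, y_max) box format are assumptions.

```python
import numpy as np

def bbox_to_mask(bbox, height, width):
    """Rasterize a bounding box (x_min, y_min, x_max, y_max) into a binary pixel mask."""
    x_min, y_min, x_max, y_max = (int(round(v)) for v in bbox)
    mask = np.zeros((height, width), dtype=np.float32)
    mask[y_min:y_max, x_min:x_max] = 1.0  # rows index y, columns index x
    return mask
```

With this conversion, both annotation types can feed the same scoring function.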

The dataset zip file contains 1,500 images and 1,500 annotation files. Each annotation file contains a caption and the regions (bounding box and pixel mask) of selected words in the image.
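A minimal sketch of reading one annotation file is shown below, assuming a JSON-like layout; the field names "caption" and "regions" are hypothetical placeholders, so check the files in the dataset zip for the real schema.

```python
import json

def load_annotation(path):
    """Load one annotation file.

    The JSON layout and the field names below ("caption", "regions") are
    assumptions for illustration; see the dataset files for the actual schema.
    """
    with open(path, "r") as f:
        ann = json.load(f)
    caption = ann["caption"]       # assumed field: the image caption
    word_regions = ann["regions"]  # assumed field: per-word bounding boxes / pixel masks
    return caption, word_regions
```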