Ziruibest committed · Commit 4902993 · 1 parent: fa01099

Update README.md

Files changed (1): README.md (+5 −5)
README.md CHANGED
@@ -18,15 +18,15 @@ BenchLMM is a benchmarking dataset focusing on the cross-style visual capability
 ### Dataset Description
 
 - **Curated by:** Rizhao Cai, Zirui Song, Dayan Guan, Zhenhao Chen, Xing Luo, Chenyu Yi, and Alex Kot.
-- **Funded by [optional]:** Supported in part by the Rapid-Rich Object Search (ROSE) Lab of Nanyang Technological University and the NTU-PKU Joint Research Institute.
-- **Shared by [optional]:** AIFEG.
+- **Funded by:** Supported in part by the Rapid-Rich Object Search (ROSE) Lab of Nanyang Technological University and the NTU-PKU Joint Research Institute.
+- **Shared by:** AIFEG.
 - **Language(s) (NLP):** English.
 - **License:** Apache-2.0.
 
-### Dataset Sources [optional]
+### Dataset Sources
 
 - **Repository:** [GitHub - AIFEG/BenchLMM](https://github.com/AIFEG/BenchLMM)
-- **Paper [optional]:** Cai, R., Song, Z., Guan, D., et al. (2023). BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models. arXiv:2312.02896.
+- **Paper:** Cai, R., Song, Z., Guan, D., et al. (2023). BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models. arXiv:2312.02896.
 
 ## Uses
 
@@ -58,7 +58,7 @@ The dataset consists of various visual questions and corresponding answers, stru
 Users should consider the specific visual contexts and question types included in the dataset when interpreting model performance.
 
-## Citation [optional]
+## Citation
 
 **BibTeX:**
 @misc{cai2023benchlmm,