| Category | Model | Elo | # Matches | Win vs. Reference (w/ # ratings) |
| --- | --- | --- | --- | --- |
| Single Image | Human Verified Reference | 1382 | 5880 | --- |
| Single Image | LLaVA-Plus (13B) 🥇 | 1203 | 678 | 35.07% (n=134) |
| Single Image | LLaVA (13B) 🥈 | 1095 | 5420 | 18.53% (n=475) |
| Single Image | mPLUG-Owl 🥉 | 1087 | 5440 | 15.83% (n=480) |
| Single Image | LlamaAdapter-v2 | 1066 | 5469 | 14.14% (n=488) |
| Single Image | Lynx (8B) | 1037 | 787 | 11.43% (n=140) |
| Single Image | idefics (9B) | 1020 | 794 | 9.72% (n=144) |
| Single Image | InstructBLIP | 1000 | 5469 | 14.12% (n=503) |
| Single Image | Otter | 962 | 5443 | 7.01% (n=499) |
| Single Image | Visual GPT (Davinci003) | 941 | 5437 | 1.57% (n=510) |
| Single Image | MiniGPT-4 | 926 | 5448 | 3.36% (n=506) |
| Single Image | Octopus V2 | 925 | 790 | 8.90% (n=146) |
| Single Image | OpenFlamingo V1 | 851 | 5479 | 2.95% (n=509) |
| Single Image | PandaGPT (13B) | 775 | 5465 | 2.70% (n=519) |
| Single Image | MultimodalGPT | 731 | 5471 | 0.19% (n=527) |
|
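The exact Elo procedure behind this leaderboard is not shown here. As a rough illustration of how pairwise human preference judgments typically translate into Elo scores like those above, the sketch below applies a standard sequential Elo update; the K-factor, starting rating of 1000, and data format are illustrative assumptions, not the benchmark's actual implementation.

```python
# Illustrative sketch only: a standard sequential Elo update over pairwise
# preference judgments. K-factor, starting rating, and input format are
# assumptions, not the leaderboard's actual computation.
from collections import defaultdict

def expected_score(r_a: float, r_b: float) -> float:
    """Expected win probability of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def compute_elo(matches, k: float = 32.0, start: float = 1000.0) -> dict:
    """matches: iterable of (model_a, model_b, score_a), where score_a is
    1.0 if A is preferred, 0.0 if B is preferred, and 0.5 for a tie."""
    ratings = defaultdict(lambda: start)
    for model_a, model_b, score_a in matches:
        e_a = expected_score(ratings[model_a], ratings[model_b])
        ratings[model_a] += k * (score_a - e_a)
        ratings[model_b] += k * ((1.0 - score_a) - (1.0 - e_a))
    return dict(ratings)

# Example usage with made-up judgments:
example = [
    ("LLaVA (13B)", "MiniGPT-4", 1.0),
    ("MiniGPT-4", "Otter", 0.5),
    ("Otter", "LLaVA (13B)", 0.0),
]
print(compute_elo(example))
```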