Poor performance on the leaderboard?

#17
by L29Ah

Why does the model perform so poorly on the Open LLM Leaderboard, especially compared to the smaller https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B?

Also, don't you guys think it's hallucinating like crazy? I'm actually talking about the 14B version, but they are similar. For example, ask it "events in 1987" and compare it with gpt-4o-mini (not even o1-mini). Result: 4o-mini gives events that really happened in 1987, while r1-distilled barely gets one or two events right...
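If anyone wants to reproduce the check locally, here's a minimal sketch with transformers (the model id is the real one; the decoding settings are my own illustrative choices, not any official eval config):

```python
# Minimal sketch to reproduce the "events in 1987" check locally.
# Assumes a recent transformers install and enough GPU memory for the 14B;
# decoding settings are illustrative only.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "events in 1987"}]
out = pipe(messages, max_new_tokens=1024, do_sample=True, temperature=0.6)

# The pipeline returns the whole conversation; the last message is the reply.
print(out[0]["generated_text"][-1]["content"])
```

Then fact-check the listed events against 1987 by hand.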

The 14B has no chat template, which is why its math scores are >0.0; the others on the leaderboard are evaluated with the chat_template enabled, so their scores are affected.
@clefourrier do you think we could get the models evaluated without the chat_template on the board, or are we still dealing with the math eval turbulence?
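(For anyone unfamiliar: "with chat_template" means the harness wraps each prompt in the model's chat format before scoring. A minimal sketch of the difference, using an illustrative question rather than the actual leaderboard prompts:)

```python
# Sketch of "raw prompt" vs "chat_template" evaluation input.
# Illustrative only: this is not the Open LLM Leaderboard harness code.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-32B")
question = "What is 7 * 8?"

# Without chat template: the bare question is fed to the model.
raw_prompt = question

# With chat template: role markers, special tokens, and a generation
# prompt are added around the question before the model sees it.
chat_prompt = tok.apply_chat_template(
    [{"role": "user", "content": question}],
    tokenize=False,
    add_generation_prompt=True,
)

print(repr(raw_prompt))
print(repr(chat_prompt))
```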

cc @alozowski do you have the time to evaluate without chat template?

Hi everyone!

Here are the results without the chat template:

{'Model name': 'deepseek-ai/DeepSeek-R1-Distill-Qwen-32B',
 'Precision': 'torch.bfloat16',
 'Revision': 'd66bcfc2f3fd52799f95943264f32ba15ca0003d',
 'Average': 38.37,
 'IFEval': 40.16,
 'BBH': 46.1,
 'MATH Lvl 5': 46.68,
 'GPQA': 23.15,
 'MUSR': 25.16,
 'MMLU-PRO': 48.98}

The results on the Leaderboard are these:

{'Model name': 'deepseek-ai/DeepSeek-R1-Distill-Qwen-32B',
 'Precision': 'torch.bfloat16',
 'Revision': '4569fd730224ec487752bd4954399c6e18bf3aa6',
 'Average': 20.12,
 'IFEval': 41.86,
 'BBH': 17.15,
 'MATH Lvl 5': 0.0,
 'GPQA': 4.59,
 'MUSR': 16.14,
 'MMLU-PRO': 40.96}

It's true that the chat template usually affects the scores, notably MATH Lvl 5.
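For a quick sense of the gap, here are the per-benchmark deltas computed from the two runs above (no-template score minus with-template score):

```python
# Per-benchmark delta between the no-template and with-template runs,
# using the scores posted above.
no_template = {"IFEval": 40.16, "BBH": 46.1, "MATH Lvl 5": 46.68,
               "GPQA": 23.15, "MUSR": 25.16, "MMLU-PRO": 48.98}
with_template = {"IFEval": 41.86, "BBH": 17.15, "MATH Lvl 5": 0.0,
                 "GPQA": 4.59, "MUSR": 16.14, "MMLU-PRO": 40.96}

for bench, score in no_template.items():
    print(f"{bench}: {score - with_template[bench]:+.2f}")
# MATH Lvl 5 is the extreme case: 46.68 without the template vs 0.0 with it.
```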

I think it would be fair to replace the results on the leaderboard with those (edit) without the chat template :)

> I think it would be fair to replace the results on the leaderboard by those with chat template :)

But the ones with chat_template enabled produce very low scores compared to the runs without chat_template.
...none of them (with or without the template) actually reflects the model's true performance at all...

Edited to without, wrote too fast, good catch!

They reflect the performance of the model when evaluated in the exact same setup as the other models on the leaderboard.
