Dolphin3.0-R1-Mistral-24B-GGUF / scores / Dolphin3.0-R1-Mistral-24B-Q5_K_M.log
eaddario
Generate perplexity and kld scores
1f75c4c
====== Perplexity statistics ======
Mean PPL(Q) : 24.304562 ± 0.233045
Mean PPL(base) : 23.352232 ± 0.220841
Cor(ln(PPL(Q)), ln(PPL(base))): 99.74%
Mean ln(PPL(Q)/PPL(base)) : 0.039971 ± 0.000693
Mean PPL(Q)/PPL(base) : 1.040781 ± 0.000721
Mean PPL(Q)-PPL(base) : 0.952329 ± 0.020290
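As a sanity check (a sketch, not part of the log), the ratio, log-ratio, and difference lines above should be mutually consistent to printed precision, using the two mean PPL values copied from this section:

```python
import math

# Mean PPL(Q) and Mean PPL(base), copied from the lines above
ppl_q, ppl_base = 24.304562, 23.352232

ratio = ppl_q / ppl_base      # compare against Mean PPL(Q)/PPL(base)
log_ratio = math.log(ratio)   # compare against Mean ln(PPL(Q)/PPL(base))
diff = ppl_q - ppl_base       # compare against Mean PPL(Q)-PPL(base)

print(round(ratio, 6), round(log_ratio, 6), round(diff, 6))
```

Agreement is only up to rounding of the printed values, since llama.cpp aggregates these statistics per token rather than deriving one line from another.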
====== KL divergence statistics ======
Mean KLD: 0.019001 ± 0.000084
Maximum KLD: 2.667997
99.9% KLD: 0.307100
99.0% KLD: 0.131595
Median KLD: 0.009005
10.0% KLD: 0.000235
5.0% KLD: 0.000046
1.0% KLD: -0.000033
Minimum KLD: -0.000534
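The figures above are per-token KL divergences between the quantized and base models' next-token distributions, aggregated over the eval set. A minimal sketch of the per-token quantity, with made-up logits (exact KL is non-negative, so the slightly negative tail values in the log are presumably a finite-precision artifact of the stored base log-probs):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kld(p, q):
    # KL(P || Q) = sum_i p_i * ln(p_i / q_i); >= 0, and 0 iff P == Q
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

base = softmax([2.0, 1.0, 0.1])    # hypothetical base-model logits for one token
quant = softmax([1.9, 1.05, 0.1])  # hypothetical quantized-model logits
print(kld(base, quant))
```

The percentile lines (99.9%, 99.0%, …) are then just quantiles of this per-token value across all evaluated tokens.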
====== Token probability statistics ======
Mean Δp: -0.033 ± 0.009 %
Maximum Δp: 53.497%
99.9% Δp: 21.400%
99.0% Δp: 11.091%
95.0% Δp: 4.986%
90.0% Δp: 2.607%
75.0% Δp: 0.323%
Median Δp: -0.000%
25.0% Δp: -0.333%
10.0% Δp: -2.621%
5.0% Δp: -5.142%
1.0% Δp: -11.765%
0.1% Δp: -23.329%
Minimum Δp: -88.396%
RMS Δp : 3.498 ± 0.021 %
Same top p: 93.797 ± 0.062 %
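As llama.cpp reports it, Δp is the change (quantized minus base, in percent) in the probability assigned to the reference token, and "Same top p" is how often the two models agree on the top-ranked token. A toy sketch of the Δp aggregates with hypothetical probabilities, not real eval data:

```python
# Hypothetical per-token probabilities of the reference token
base_p  = [0.90, 0.40, 0.75, 0.10]
quant_p = [0.88, 0.42, 0.70, 0.11]

deltas = [(q - b) * 100 for b, q in zip(base_p, quant_p)]   # Δp in percent
mean_dp = sum(deltas) / len(deltas)                         # analogue of "Mean Δp"
rms_dp = (sum(d * d for d in deltas) / len(deltas)) ** 0.5  # analogue of "RMS Δp"
print(mean_dp, rms_dp)
```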
llama_perf_context_print: load time = 128672.98 ms
llama_perf_context_print: prompt eval time = 2206637.86 ms / 304128 tokens ( 7.26 ms per token, 137.82 tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 2358244.53 ms / 304129 tokens
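The per-token figures in the prompt-eval line follow directly from its totals; a quick check (a sketch, with the two totals copied from the line above):

```python
# prompt eval time (ms) and token count, from the prompt eval line above
prompt_ms, n_tokens = 2206637.86, 304128

ms_per_token = prompt_ms / n_tokens
tokens_per_second = n_tokens / (prompt_ms / 1000.0)
print(f"{ms_per_token:.2f} ms per token, {tokens_per_second:.2f} tokens per second")
```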