Dolphin3.0-R1-Mistral-24B-GGUF / scores / Dolphin3.0-R1-Mistral-24B-IQ4_NL.log
Generate perplexity and KLD scores
1f75c4c
====== Perplexity statistics ======
Mean PPL(Q) : 23.491557 ± 0.221754
Mean PPL(base) : 23.352232 ± 0.220841
Cor(ln(PPL(Q)), ln(PPL(base))): 99.65%
Mean ln(PPL(Q)/PPL(base)) : 0.005948 ± 0.000786
Mean PPL(Q)/PPL(base) : 1.005966 ± 0.000791
Mean PPL(Q)-PPL(base) : 0.139324 ± 0.018431
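As a quick sanity check (not part of the original log), the reported ratio, log-ratio, and difference all follow from the two mean perplexities above; recomputing them from the rounded printed values reproduces the log's figures to within rounding error:

```python
import math

# Mean perplexities reported in the log (IQ4_NL quant vs. base model).
ppl_q = 23.491557
ppl_base = 23.352232

ratio = ppl_q / ppl_base      # Mean PPL(Q)/PPL(base)
log_ratio = math.log(ratio)   # Mean ln(PPL(Q)/PPL(base))
diff = ppl_q - ppl_base       # Mean PPL(Q)-PPL(base)
```

The small residual against the printed statistics (on the order of 1e-6) comes from the perplexities themselves being rounded to six decimals before printing.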
====== KL divergence statistics ======
Mean KLD: 0.025519 ± 0.000102
Maximum KLD: 2.251977
99.9% KLD: 0.379603
99.0% KLD: 0.171840
Median KLD: 0.012414
10.0% KLD: 0.000339
5.0% KLD: 0.000079
1.0% KLD: -0.000015
Minimum KLD: -0.000684
====== Token probability statistics ======
Mean Δp: -0.210 ± 0.010 %
Maximum Δp: 56.116%
99.9% Δp: 23.081%
99.0% Δp: 12.242%
95.0% Δp: 5.439%
90.0% Δp: 2.734%
75.0% Δp: 0.282%
Median Δp: -0.000%
25.0% Δp: -0.493%
10.0% Δp: -3.448%
5.0% Δp: -6.500%
1.0% Δp: -14.293%
0.1% Δp: -27.100%
Minimum Δp: -67.311%
RMS Δp : 4.044 ± 0.022 %
Same top p: 92.678 ± 0.067 %
llama_perf_context_print: load time = 103365.78 ms
llama_perf_context_print: prompt eval time = 1896464.64 ms / 304128 tokens ( 6.24 ms per token, 160.37 tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 1956813.06 ms / 304129 tokens
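The per-token latency and throughput in the prompt-eval line are just the raw totals restated; a quick check (not part of the original log) confirms the arithmetic:

```python
# Totals from the llama_perf_context_print prompt-eval line above.
prompt_ms = 1896464.64  # total prompt evaluation time in milliseconds
n_tokens = 304128       # prompt tokens evaluated

ms_per_token = prompt_ms / n_tokens            # ~6.24 ms per token
tokens_per_s = n_tokens / (prompt_ms / 1000.0) # ~160.37 tokens per second
```

The eval line reporting `inf tokens per second` is expected here: the perplexity run performs no autoregressive generation, so eval time is 0.00 ms over a single run.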