====== Perplexity statistics ======
Mean PPL(Q) : 24.940671 ± 0.241913
Mean PPL(base) : 23.352232 ± 0.220841
Cor(ln(PPL(Q)), ln(PPL(base))): 99.60%
Mean ln(PPL(Q)/PPL(base)) : 0.065807 ± 0.000886
Mean PPL(Q)/PPL(base) : 1.068021 ± 0.000947
Mean PPL(Q)-PPL(base) : 1.588438 ± 0.029452
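As a rough consistency check on the summary above: exp(0.065807) ≈ 1.06802, which reproduces the reported mean ratio of 1.068021 up to rounding, and 24.940671 - 23.352232 = 1.588439 matches the reported mean difference. In other words, this quantization raises perplexity by about 6.8% relative to the base model on this text.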
====== KL divergence statistics ======
Mean KLD: 0.030244 ± 0.000124
Maximum KLD: 2.276901
99.9% KLD: 0.467041
99.0% KLD: 0.202787
Median KLD: 0.014692
10.0% KLD: 0.000262
5.0% KLD: 0.000030
1.0% KLD: -0.000070
Minimum KLD: -0.000565
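For context, the quantity summarized above is the per-token KL divergence between the base model's and the quantized model's next-token distributions, averaged over all evaluated positions. A minimal sketch of that per-token computation, assuming logits_base and logits_q are the two models' raw logits for the same position (the NumPy helper below is illustrative, not llama.cpp's own code):

import numpy as np

def token_kld(logits_base: np.ndarray, logits_q: np.ndarray) -> float:
    # KL(base || quantized) over the vocabulary for a single position.
    log_p = logits_base - logits_base.max()
    log_p -= np.log(np.exp(log_p).sum())   # log-softmax of base logits
    log_q = logits_q - logits_q.max()
    log_q -= np.log(np.exp(log_q).sum())   # log-softmax of quantized logits
    p = np.exp(log_p)
    return float(np.sum(p * (log_p - log_q)))

True KL divergence is non-negative, so the small negative values in the 1% tail and at the minimum are presumably rounding artifacts from the limited precision with which the base run's probabilities are stored.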
====== Token probability statistics ======
Mean Δp: 0.252 ± 0.011 %
Maximum Δp: 56.165%
99.9% Δp: 26.367%
99.0% Δp: 14.890%
95.0% Δp: 7.165%
90.0% Δp: 3.916%
75.0% Δp: 0.648%
Median Δp: 0.000%
25.0% Δp: -0.249%
10.0% Δp: -2.747%
5.0% Δp: -5.791%
1.0% Δp: -14.128%
0.1% Δp: -27.579%
Minimum Δp: -73.858%
RMS Δp : 4.403 ± 0.023 %
Same top p: 92.309 ± 0.068 %
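Here Δp is taken to be the change in the probability each model assigns to the reference token at a position (assumed quantized minus base), and "Same top p" the share of positions at which both models rank the same token first; a minimal sketch under those assumptions, in the same illustrative NumPy style as above:

import numpy as np

def delta_p(logits_base: np.ndarray, logits_q: np.ndarray, ref_token: int) -> float:
    # Change in the probability of the reference token, in percent.
    p_base = np.exp(logits_base - logits_base.max())
    p_base /= p_base.sum()
    p_q = np.exp(logits_q - logits_q.max())
    p_q /= p_q.sum()
    return 100.0 * (p_q[ref_token] - p_base[ref_token])

def same_top(logits_base: np.ndarray, logits_q: np.ndarray) -> bool:
    # Do both models agree on the most likely next token?
    return int(np.argmax(logits_base)) == int(np.argmax(logits_q))

Averaged over all positions, these would give the "Mean Δp" and "Same top p" lines above; the quantized model picks the same top-1 token as the base model at roughly 92.3% of positions.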
llama_perf_context_print: load time = 288598.22 ms
llama_perf_context_print: prompt eval time = 1999513.57 ms / 304128 tokens ( 6.57 ms per token, 152.10 tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 2559785.48 ms / 304129 tokens
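The timing lines are self-consistent: 304,128 prompt tokens in 1,999,513.57 ms works out to about 6.57 ms per token, i.e. roughly 152 tokens per second, and the whole run took about 2,560 seconds (just under 43 minutes), of which roughly 289 seconds were model load time.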