====== Perplexity statistics ======
Mean PPL(Q) : 24.550639 ± 0.235362
Mean PPL(base) : 23.352232 ± 0.220841
Cor(ln(PPL(Q)), ln(PPL(base))): 99.30%
Mean ln(PPL(Q)/PPL(base)) : 0.050045 ± 0.001135
Mean PPL(Q)/PPL(base) : 1.051319 ± 0.001194
Mean PPL(Q)-PPL(base) : 1.198406 ± 0.030666
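
Note: PPL(Q) is the perplexity of the quantized model and PPL(base) that of the full-precision reference. The ratio statistics are consistent with each other, since exp(0.050045) ≈ 1.0513 matches the reported mean ratio. A minimal sketch of how such aggregates could be computed from per-token log-probabilities, using NumPy and synthetic stand-in data rather than the actual values behind this table:

import numpy as np

rng = np.random.default_rng(0)
n_tokens = 304128  # number of tokens evaluated in this run

# Synthetic per-token log-probabilities of the reference token under the base
# model and a slightly degraded quantized model (illustrative only).
logp_base = -np.abs(rng.normal(3.15, 1.0, n_tokens))
logp_q    = logp_base - np.abs(rng.normal(0.05, 0.02, n_tokens))

# Perplexity is the exponential of the mean negative log-likelihood per token.
ppl_base = np.exp(-logp_base.mean())
ppl_q    = np.exp(-logp_q.mean())

# Mean per-token log-ratio; its exponential equals the corpus-level PPL ratio,
# e.g. exp(0.050045) ~= 1.0513 for the numbers reported above.
mean_ln_ratio = (logp_base - logp_q).mean()
print(ppl_q / ppl_base, np.exp(mean_ln_ratio))
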
====== KL divergence statistics ======
Mean KLD: 0.053089 ± 0.000213
Maximum KLD: 4.851488
99.9% KLD: 0.770215
99.0% KLD: 0.362664
Median KLD: 0.025806
10.0% KLD: 0.000577
5.0% KLD: 0.000120
1.0% KLD: -0.000018
Minimum KLD: -0.000444
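
Note: each KLD value is the per-token Kullback-Leibler divergence from the base model's next-token distribution to the quantized model's, KL(base||Q) = sum_i p_base(i) * ln(p_base(i)/p_Q(i)). True KLD is non-negative, so the tiny negative values at the low percentiles are presumably rounding/precision artifacts of how the stored base probabilities are represented. A minimal sketch of the per-token computation from raw logits (synthetic data, illustrative only):

import numpy as np

def token_kld(logits_base: np.ndarray, logits_q: np.ndarray) -> float:
    """KL(base || quantized) for one token position, computed from raw logits."""
    def log_softmax(x):
        x = x - x.max()
        return x - np.log(np.exp(x).sum())
    lp_base = log_softmax(logits_base)
    lp_q = log_softmax(logits_q)
    p_base = np.exp(lp_base)
    return float((p_base * (lp_base - lp_q)).sum())

# Synthetic example: one vocabulary-sized logit vector plus small quantization noise.
rng = np.random.default_rng(0)
logits_base = rng.normal(0.0, 4.0, 32000)
logits_q = logits_base + rng.normal(0.0, 0.3, 32000)
print(token_kld(logits_base, logits_q))  # one sample of the per-token KLD
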
====== Token probability statistics ======
Mean Δp: -0.115 ± 0.015 %
Maximum Δp: 82.459%
99.9% Δp: 33.908%
99.0% Δp: 18.275%
95.0% Δp: 8.313%
90.0% Δp: 4.302%
75.0% Δp: 0.536%
Median Δp: -0.000%
25.0% Δp: -0.557%
10.0% Δp: -4.600%
5.0% Δp: -8.973%
1.0% Δp: -20.298%
0.1% Δp: -38.592%
Minimum Δp: -79.558%
RMS Δp : 5.840 ± 0.030 %
Same top p: 89.673 ± 0.078 %
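
Note: Δp is the difference between the probability the quantized model and the base model assign to the reference token at each position, and "Same top p" is the percentage of positions where both models agree on the most likely next token. A minimal sketch of these two statistics (NumPy, synthetic distributions, hypothetical variable names):

import numpy as np

rng = np.random.default_rng(0)
n_pos, vocab = 512, 8000

# Synthetic per-position logits for the base model and a perturbed "quantized" one.
logits_base = rng.normal(0.0, 4.0, (n_pos, vocab))
logits_q = logits_base + rng.normal(0.0, 0.3, (n_pos, vocab))

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

p_base, p_q = softmax(logits_base), softmax(logits_q)
ref = rng.integers(0, vocab, n_pos)          # reference token at each position
rows = np.arange(n_pos)

# Delta-p: quantized minus base probability of the reference token, in percent.
dp = 100.0 * (p_q[rows, ref] - p_base[rows, ref])
print(dp.mean(), np.sqrt((dp ** 2).mean()))  # mean and RMS delta-p

# Same top p: how often both models agree on the argmax token, in percent.
print(100.0 * (p_q.argmax(axis=1) == p_base.argmax(axis=1)).mean())
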
llama_perf_context_print: load time = 95100.93 ms
llama_perf_context_print: prompt eval time = 1924525.06 ms / 304128 tokens ( 6.33 ms per token, 158.03 tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 2042841.27 ms / 304129 tokens
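
Note: the throughput figures are internally consistent: 304128 prompt tokens / 1924525.06 ms ≈ 158.03 tokens per second, i.e. about 6.33 ms per token. The eval line reports 0.00 ms over 1 run, presumably because perplexity evaluation does no autoregressive generation, so all decoding is counted as prompt eval.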