====== Perplexity statistics ======
Mean PPL(Q) : 24.415348 ± 0.234385
Mean PPL(base) : 23.352232 ± 0.220841
Cor(ln(PPL(Q)), ln(PPL(base))): 99.79%
Mean ln(PPL(Q)/PPL(base)) : 0.044519 ± 0.000634
Mean PPL(Q)/PPL(base) : 1.045525 ± 0.000663
Mean PPL(Q)-PPL(base) : 1.063116 ± 0.020031
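As a sanity check on these numbers (my own cross-check, not part of the tool's output), the reported mean ratio should sit close to the exponential of the mean log-ratio, since PPL(Q)/PPL(base) = exp(ln PPL(Q) - ln PPL(base)):

```python
import math

# Cross-check (assumption: with the tiny spread seen here, the mean ratio is
# well approximated by exp of the mean log-ratio).
mean_ln_ratio = 0.044519           # "Mean ln(PPL(Q)/PPL(base))" from the log above
print(math.exp(mean_ln_ratio))     # ~1.04552, consistent with "Mean PPL(Q)/PPL(base) : 1.045525"
```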
====== KL divergence statistics ======
Mean KLD: 0.015503 ± 0.000075
Maximum KLD: 3.272221
99.9% KLD: 0.267841
99.0% KLD: 0.107995
Median KLD: 0.007291
10.0% KLD: 0.000184
5.0% KLD: 0.000033
1.0% KLD: -0.000041
Minimum KLD: -0.000471
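KL divergence is non-negative by definition, so the small negative values at the 1.0% and minimum entries are almost certainly floating-point round-off in the per-token computation. For reference, a minimal sketch (my own, not the llama.cpp implementation) of the per-token statistic, assuming the direction KL(base || Q) over the full vocabulary:

```python
import numpy as np

def log_softmax(logits: np.ndarray) -> np.ndarray:
    # Numerically stable log-softmax over the vocabulary dimension.
    z = logits - logits.max()
    return z - np.log(np.exp(z).sum())

def token_kld(logits_base: np.ndarray, logits_q: np.ndarray) -> float:
    # KL(base || Q) for a single token position; tiny negative results can
    # appear in float32 when the two distributions are nearly identical.
    log_p = log_softmax(logits_base)
    log_q = log_softmax(logits_q)
    return float(np.sum(np.exp(log_p) * (log_p - log_q)))
```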
====== Token probability statistics ======
Mean Δp: -0.008 ± 0.008 %
Maximum Δp: 80.786%
99.9% Δp: 18.971%
99.0% Δp: 10.037%
95.0% Δp: 4.585%
90.0% Δp: 2.431%
75.0% Δp: 0.317%
Median Δp: -0.000%
25.0% Δp: -0.289%
10.0% Δp: -2.327%
5.0% Δp: -4.593%
1.0% Δp: -10.622%
0.1% Δp: -22.631%
Minimum Δp: -95.612%
RMS Δp : 3.201 ± 0.022 %
Same top p: 94.448 ± 0.059 %
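The Δp and "same top" figures are per-token comparisons between the two models. My reading (an assumption, not verified against the tool's source) is that Δp is the change in the probability assigned to the observed next token, and "same top" asks whether both models agree on the most likely token; a sketch under that assumption:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def delta_p_and_same_top(logits_base, logits_q, target_id: int):
    # delta_p: probability of the observed token under Q minus under base (in %).
    # same_top: 1 if both models rank the same token first, else 0.
    p_base = softmax(np.asarray(logits_base, dtype=np.float64))
    p_q = softmax(np.asarray(logits_q, dtype=np.float64))
    delta_p = (p_q[target_id] - p_base[target_id]) * 100.0
    same_top = int(np.argmax(p_q) == np.argmax(p_base))
    return delta_p, same_top
```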
llama_perf_context_print: load time = 134148.25 ms
llama_perf_context_print: prompt eval time = 2502976.05 ms / 304128 tokens ( 8.23 ms per token, 121.51 tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 3009280.06 ms / 304129 tokens
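The throughput figures follow directly from the totals: 304128 tokens over roughly 2,502,976 ms of prompt evaluation works out to about 8.23 ms per token, i.e. around 121.5 tokens per second.

```python
# Arithmetic check of the reported prompt-eval throughput.
tokens = 304128
prompt_eval_ms = 2502976.05
print(prompt_eval_ms / tokens)              # ~8.23 ms per token
print(tokens / (prompt_eval_ms / 1000.0))   # ~121.5 tokens per second
```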